Dataset schema (column types and string-length ranges, as reported by the dataset viewer):

- gem_id: string (37-41 characters)
- paper_id: string (3-4 characters)
- paper_title: string (19-183 characters)
- paper_abstract: string (168-1.38k characters)
- paper_content: sequence
- paper_headers: sequence
- slide_id: string (37-41 characters)
- slide_title: string (2-85 characters)
- slide_content_text: string (11-2.55k characters)
- target: string (11-2.55k characters)
- references: list
gem_id: GEM-SciDuet-train-114#paper-1307#slide-5
paper_id: 1307
paper_title: Confidence Modeling for Neural Semantic Parsing
paper_abstract: In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions, allowing users to interpret their model and to verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty, and design different metrics to characterize them. We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores. At test time, the regression model's outputs are used as confidence scores. Our approach does not interfere with the training of the model, and can thus be applied to various architectures without sacrificing test accuracy. Furthermore, we propose a method based on backpropagation which allows us to interpret model behavior by identifying which parts of the input contribute to uncertain predictions. Experimental results on two semantic parsing datasets (IFTTT, Quirk et al. 2015; and DJANGO, Oda et al. 2015) show that our model is superior to a method based on posterior probability. We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy. Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.

2 Related Work

Confidence Estimation. Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) and question answering (Gondek et al., 2012). To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored. A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017). But the resulting models often contain more parameters, and the training process has to be changed accordingly, which makes these approaches difficult to work with. Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of a Gaussian process. We adapt their framework so as to represent uncertainty in encoder-decoder architectures, and extend it by adding Gaussian noise to weights.

Semantic Parsing. Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015). More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016) and shown to perform competitively whilst eschewing the use of templates or manually designed features. There have been several efforts to improve these models, including the use of a tree decoder (Dong and Lapata, 2016), data augmentation (Jia and Liang, 2016), the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017), coarse-to-fine decoding (Dong and Lapata, 2018), network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017), user feedback (Iyer et al., 2017), and transfer learning (Fan et al., 2017). Current semantic parsers will by default generate some output for a given input even if this is just a random guess. System results can thus be somewhat unexpected, inadvertently affecting user experience. Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is to be correct.
3 Neural Semantic Parsing Model

In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016) we assume throughout this paper. The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1. An encoder is used to encode the natural language input $q = q_1 \cdots q_{|q|}$ into a vector representation, and a decoder learns to generate a logical form representation of its meaning $a = a_1 \cdots a_{|a|}$ conditioned on the encoding vectors. The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially. The probability of generating the whole sequence $p(a|q)$ is factorized as:

$$p(a|q) = \prod_{t=1}^{|a|} p(a_t \mid a_{<t}, q) \quad (1)$$

where $a_{<t} = a_1 \cdots a_{t-1}$.

[Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty. The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.]

Let $e_t \in \mathbb{R}^n$ denote the hidden vector of the encoder at time step $t$. It is computed via $e_t = f_{\mathrm{LSTM}}(e_{t-1}, q_t)$, where $f_{\mathrm{LSTM}}$ refers to the LSTM unit, and $q_t \in \mathbb{R}^n$ is the word embedding of $q_t$. Once the tokens of the input sequence are encoded into vectors, $e_{|q|}$ is used to initialize the hidden states of the first time step in the decoder. Similarly, the hidden vector of the decoder at time step $t$ is computed by $d_t = f_{\mathrm{LSTM}}(d_{t-1}, a_{t-1})$, where $a_{t-1} \in \mathbb{R}^n$ is the word vector of the previously predicted token. Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context. For the current time step $t$ of the decoder, we compute its attention score with the $k$-th hidden state in the encoder as:

$$r_{t,k} \propto \exp\{d_t \cdot e_k\} \quad (2)$$

where $\sum_{j=1}^{|q|} r_{t,j} = 1$. The probability of generating $a_t$ is computed via:

$$c_t = \sum_{k=1}^{|q|} r_{t,k}\, e_k \quad (3)$$

$$d_t^{att} = \tanh\left(W_1 d_t + W_2 c_t\right) \quad (4)$$

$$p(a_t \mid a_{<t}, q) = \mathrm{softmax}_{a_t}\left(W_o d_t^{att}\right) \quad (5)$$

where $W_1, W_2 \in \mathbb{R}^{n \times n}$ and $W_o \in \mathbb{R}^{|V_a| \times n}$ are three parameter matrices. The training objective is to maximize the likelihood of the generated meaning representation $a$ given input $q$, i.e., maximize $\sum_{(q,a) \in D} \log p(a|q)$, where $D$ represents the training pairs. At test time, the model's prediction for input $q$ is obtained via $\hat{a} = \arg\max_{a'} p(a'|q)$, where $a'$ represents candidate outputs. Because $p(a|q)$ is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.
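To make the decoding step concrete, here is a minimal NumPy sketch of Equations (2)-(5); the function and variable names are illustrative (not from the paper's released code), and shapes assume a single decoding step.

```python
import numpy as np

def softmax(x):
    x = x - x.max()  # numerical stability
    e = np.exp(x)
    return e / e.sum()

def attention_step(d_t, E, W1, W2, Wo):
    """One attention-based decoding step.

    d_t: decoder hidden state, shape (n,)
    E:   encoder hidden states, shape (|q|, n)
    W1, W2: (n, n) projections; Wo: (|Va|, n) output matrix
    """
    r_t = softmax(E @ d_t)                # Eq. (2): attention over input tokens
    c_t = r_t @ E                         # Eq. (3): context vector
    d_att = np.tanh(W1 @ d_t + W2 @ c_t)  # Eq. (4)
    p_t = softmax(Wo @ d_att)             # Eq. (5): distribution over output vocab
    return p_t, r_t
```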
4 Confidence Estimation

Given input $q$ and its predicted meaning representation $a$, the confidence model estimates a score $s(q, a) \in (0, 1)$. A large score indicates the model is confident that its prediction is correct. In order to gauge confidence, we need to estimate "what we do not know". To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them. We then feed these metrics into a regression model in order to predict $s(q, a)$.

Algorithm 1: Dropout Perturbation
Input: $q$, $a$: input and its prediction; $M$: model parameters
1: for $i \leftarrow 1, \cdots, F$ do
2:   $\hat{M}_i \leftarrow$ apply dropout layers to $M$  (Figure 1)
3:   run a forward pass and compute $\hat{p}(a|q; \hat{M}_i)$
4: compute the variance of $\{\hat{p}(a|q; \hat{M}_i)\}_{i=1}^{F}$  (Equation (6))

4.1 Model Uncertainty

The model's parameters or structures contain uncertainty, which makes the model less confident about the values of $p(a|q)$. For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty. We describe metrics for capturing uncertainty below.

Dropout Perturbation. Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016). Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution. In our work, we use dropout at test time instead. As shown in Algorithm 1, we perform $F$ forward passes through the network, and collect the results $\{\hat{p}(a|q; \hat{M}_i)\}_{i=1}^{F}$, where $\hat{M}_i$ represents the perturbed parameters. The uncertainty metric is then computed as the variance of these results. We define the metric on the sequence level as:

$$\mathrm{var}\{\hat{p}(a|q; \hat{M}_i)\}_{i=1}^{F}. \quad (6)$$

In addition, we compute the uncertainty $u_{a_t}$ at the token level $a_t$ via:

$$u_{a_t} = \mathrm{var}\{\hat{p}(a_t \mid a_{<t}, q; \hat{M}_i)\}_{i=1}^{F} \quad (7)$$

where $\hat{p}(a_t \mid a_{<t}, q; \hat{M}_i)$ is the probability of generating token $a_t$ (Equation (5)) using the perturbed model $\hat{M}_i$. We operationalize token-level uncertainty in two ways, as the average score $\mathrm{avg}\{u_{a_t}\}_{t=1}^{|a|}$ and the maximum score $\max\{u_{a_t}\}_{t=1}^{|a|}$ (since the uncertainty of a sequence is often determined by the most uncertain token). As shown in Figure 1, we add dropout layers to i) the word vectors of the encoder and decoder $q_t, a_t$; ii) the output vectors of the encoder $e_t$; iii) the bridge vector $e_{|q|}$ used to initialize the hidden states of the first time step in the decoder; and iv) the decoding vectors $d_t^{att}$ (Equation (4)).

Gaussian Noise. Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters. We instead use Gaussian noise, and apply the metrics in the same way discussed above. Let $v$ denote a vector perturbed by noise, and $g$ a vector sampled from the Gaussian distribution $N(0, \sigma^2)$. We use $\hat{v} = v + g$ and $\hat{v} = v + v \odot g$ as two noise injection methods. Intuitively, if the model is more confident in an example, it should be more robust to perturbations.

Posterior Probability. Our last class of metrics is based on posterior probability. We use the log probability $\log p(a|q)$ as a sequence-level metric. The token-level metric $\min\{p(a_t \mid a_{<t}, q)\}_{t=1}^{|a|}$ can identify the most uncertain predicted token. The perplexity per token $-\frac{1}{|a|} \sum_{t=1}^{|a|} \log p(a_t \mid a_{<t}, q)$ is also employed.
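A minimal sketch of Algorithm 1, assuming a hypothetical `score_fn` helper that runs one forward pass with dropout layers active and returns the sequence log-probability together with the per-token probabilities; this helper is not from the paper's code.

```python
import numpy as np

def dropout_uncertainty(score_fn, q, a, F=30):
    """Dropout-perturbation metrics (Algorithm 1, Eqs. (6)-(7)).

    score_fn(q, a) is assumed to re-sample dropout masks on each call and
    return (seq_log_prob, token_probs) under the perturbed parameters.
    """
    seq_probs, tok_probs = [], []
    for _ in range(F):
        seq_logp, tok_p = score_fn(q, a)   # one stochastic forward pass
        seq_probs.append(np.exp(seq_logp))
        tok_probs.append(tok_p)
    tok_probs = np.array(tok_probs)        # shape (F, |a|)
    u_seq = np.var(seq_probs)              # Eq. (6): sequence-level variance
    u_tok = np.var(tok_probs, axis=0)      # Eq. (7): per-token variance
    return u_seq, float(u_tok.mean()), float(u_tok.max())
```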
4.2 Data Uncertainty

The coverage of the training data also affects the uncertainty of predictions. If the input $q$ does not match the training distribution or contains unknown words, it is difficult to predict $p(a|q)$ reliably. We define two metrics.

Probability of Input. We train a language model on the training data, and use it to estimate the probability of the input $p(q|D)$, where $D$ represents the training data.

Number of Unknown Tokens. Tokens that do not appear in the training data harm robustness, and lead to uncertainty. So, we use the number of unknown tokens in the input $q$ as a metric.

4.3 Input Uncertainty

Even if the model can estimate $p(a|q)$ reliably, the input itself may be ambiguous. For instance, the input "the flight is at 9 o'clock" can be interpreted as either flight_time(9am) or flight_time(9pm). Selecting between these predictions is difficult, especially if they are both highly likely. We use the following metrics to measure uncertainty caused by ambiguous inputs.

Variance of Top Candidates. We use the variance of the probability of the top candidates to indicate whether these are similar. The sequence-level metric is computed by:

$$\mathrm{var}\{p(a_i|q)\}_{i=1}^{K}$$

where $a_1 \ldots a_K$ are the $K$-best predictions obtained by the beam search during inference (Section 3).

Entropy of Decoding. The sequence-level entropy of the decoding process is computed via:

$$H[a|q] = -\sum_{a'} p(a'|q) \log p(a'|q)$$

which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions. The token-level metrics of decoding entropy are computed by $\mathrm{avg}\{H[a_t \mid a_{<t}, q]\}_{t=1}^{|a|}$ and $\max\{H[a_t \mid a_{<t}, q]\}_{t=1}^{|a|}$.
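A small sketch of how these input-uncertainty metrics could be aggregated, assuming the beam log-probabilities and per-step entropies have already been computed; the function and key names are illustrative.

```python
import numpy as np

def input_uncertainty_metrics(beam_log_probs, token_entropies):
    """Aggregate input-uncertainty features.

    beam_log_probs:  log p(a_i | q) for the K-best beam candidates
    token_entropies: H[a_t | a_<t, q] per decoding step, e.g. estimated
                     by Monte Carlo sampling of continuations
    """
    return {
        "var(K-best)": float(np.var(np.exp(beam_log_probs))),
        "avg(entropy)": float(np.mean(token_entropies)),
        "max(entropy)": float(np.max(token_entropies)),
    }
```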
4.4 Confidence Scoring

The sentence- and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score $s(q, a)$. The model is wrapped with a logistic function so that confidence scores are in the range $(0, 1)$. Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as the target value. The training loss is defined as:

$$\sum_{(q,a) \in D} \ln\left(1 + e^{-\hat{s}(q,a)}\right) y_{q,a} + \ln\left(1 + e^{\hat{s}(q,a)}\right) \left(1 - y_{q,a}\right)$$

where $D$ represents the data, $y_{q,a}$ is the target F1 score, and $\hat{s}(q,a)$ the predicted confidence score. We refer readers to Chen and Guestrin (2016) for the mathematical details of how the gradient tree boosting model is trained. Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.

5 Uncertainty Interpretation

Confidence scores are useful insofar as they can be traced back to the inputs causing the uncertainty in the first place. For semantic parsing, identifying which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases, or refined if they represent noise.

[Figure 2: Neuron $m$'s uncertainty score is gathered from its child neurons: $u_m = v_m^{c_1} u_{c_1} + v_m^{c_2} u_{c_2}$. The score $u_m$ is then redistributed to its parent neurons $p_1$ and $p_2$, which satisfies $v_{p_1}^{m} + v_{p_2}^{m} = 1$.]

In this section, we introduce an algorithm that backpropagates the token-level uncertainty scores (see Equation (7)) from predictions to input tokens, following the ideas of Bach et al. (2015) and Zhang et al. (2016). Let $u_m$ denote neuron $m$'s uncertainty score, which indicates the degree to which it contributes to uncertainty. As shown in Figure 2, $u_m$ is computed by summing the scores backpropagated from its child neurons:

$$u_m = \sum_{c \in \mathrm{Child}(m)} v_m^c u_c$$

where $\mathrm{Child}(m)$ is the set of $m$'s child neurons, and the non-negative contribution ratio $v_m^c$ indicates how much of $u_c$ we backpropagate to neuron $m$. Intuitively, if neuron $m$ contributes more to $c$'s value, the ratio $v_m^c$ should be larger. After obtaining the score $u_m$, we redistribute it to its parent neurons in the same way. Contribution ratios from $m$ to its parent neurons are normalized to 1:

$$\sum_{p \in \mathrm{Parent}(m)} v_p^m = 1$$

where $\mathrm{Parent}(m)$ is the set of $m$'s parent neurons.

Given the above constraints, we now define different backpropagation rules for the operators used in neural networks. We first describe the rule used for fully connected layers. Let $x$ denote the input. The output is computed by $z = \sigma(Wx + b)$, where $\sigma$ is a nonlinear function, $W \in \mathbb{R}^{|z| \times |x|}$ is the weight matrix, $b \in \mathbb{R}^{|z|}$ is the bias, and neuron $z_i$ is computed via $z_i = \sigma(\sum_{j=1}^{|x|} W_{i,j} x_j + b_i)$. Neuron $x_k$'s uncertainty score $u_{x_k}$ is gathered from the next layer:

$$u_{x_k} = \sum_{i=1}^{|z|} v_{x_k}^{z_i} u_{z_i} = \sum_{i=1}^{|z|} \frac{|W_{i,k}\, x_k|}{\sum_{j=1}^{|x|} |W_{i,j}\, x_j|}\, u_{z_i}$$

ignoring the nonlinear function $\sigma$ and the bias $b$. The ratio $v_{x_k}^{z_i}$ is proportional to the contribution of $x_k$ to the value of $z_i$.

We also define backpropagation rules for element-wise vector operators. For $z = x \pm y$, these are:

$$u_{x_k} = \frac{|x_k|}{|x_k| + |y_k|}\, u_{z_k} \qquad u_{y_k} = \frac{|y_k|}{|x_k| + |y_k|}\, u_{z_k}$$

where the contribution ratios $v_{x_k}^{z_k}$ and $v_{y_k}^{z_k}$ are determined by $|x_k|$ and $|y_k|$. For multiplication, the contributions of the two factors in $\frac{1}{3} \times 3$ should be the same. So, the propagation rules for $z = x \odot y$ are:

$$u_{x_k} = \frac{|\log |x_k||}{|\log |x_k|| + |\log |y_k||}\, u_{z_k} \qquad u_{y_k} = \frac{|\log |y_k||}{|\log |x_k|| + |\log |y_k||}\, u_{z_k}$$

where the contribution ratios are determined by $|\log |x_k||$ and $|\log |y_k||$. For scalar multiplication, $z = \lambda x$ where $\lambda$ denotes a constant, we directly assign $z$'s uncertainty scores to $x$, and the backpropagation rule is $u_{x_k} = u_{z_k}$.

Algorithm 2: Uncertainty Interpretation
Input: $q$, $a$: input and its prediction
Output: $\{\hat{u}_{q_t}\}_{t=1}^{|q|}$: interpretation scores for input tokens
Function: TokenUnc: get token-level uncertainty
1: // get token-level uncertainty for predicted tokens
2: $\{u_{a_t}\}_{t=1}^{|a|} \leftarrow$ TokenUnc$(q, a)$
3: // initialize uncertainty scores for backpropagation
4: for $t \leftarrow 1, \cdots, |a|$ do
5:   decoder classifier's output neuron $\leftarrow u_{a_t}$
6: // run backpropagation
7: for $m \leftarrow$ neuron in backward topological order do
8:   // gather scores from child neurons
9:   $u_m \leftarrow \sum_{c \in \mathrm{Child}(m)} v_m^c u_c$
10: // summarize scores for input words
11: for $t \leftarrow 1, \cdots, |q|$ do
12:   $u_{q_t} \leftarrow \sum_{c \in q_t} u_c$
13: $\{\hat{u}_{q_t}\}_{t=1}^{|q|} \leftarrow$ normalize $\{u_{q_t}\}_{t=1}^{|q|}$

As shown in Algorithm 2, we first initialize the uncertainty backpropagation in the decoder (lines 1-5). For each predicted token $a_t$, we compute its uncertainty score $u_{a_t}$ as in Equation (7). Next, we find the dimension of $a_t$ in the decoder's softmax classifier (Equation (5)), and initialize that neuron with the uncertainty score $u_{a_t}$. We then backpropagate these uncertainty scores through the network (lines 6-9), and finally into the neurons of the input words. We summarize them and compute the token-level scores used for interpreting the results (lines 10-13). For the input word vector $q_t$, we use the sum of its neuron-level scores as the token-level score:

$$\hat{u}_{q_t} \propto \sum_{c \in q_t} u_c$$

where $c \in q_t$ represents the neurons of the word vector $q_t$, and $\sum_{t=1}^{|q|} \hat{u}_{q_t} = 1$. We use the normalized score $\hat{u}_{q_t}$ to indicate token $q_t$'s contribution to the prediction uncertainty.
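The fully connected rule above fits in a few lines; the sketch below assumes plain NumPy arrays and, as in the text, ignores the nonlinearity and the bias. It is an illustration of the rule, not the authors' implementation.

```python
import numpy as np

def backprop_uncertainty_fc(u_z, W, x, eps=1e-12):
    """Redistribute output-neuron uncertainty u_z through z = sigma(Wx + b).

    u_z: uncertainty scores of the output neurons, shape (|z|,)
    W:   weight matrix, shape (|z|, |x|);  x: layer input, shape (|x|,)
    Returns u_x with u_x[k] = sum_i |W[i,k]*x[k]| / sum_j |W[i,j]*x[j]| * u_z[i].
    """
    contrib = np.abs(W * x[None, :])                        # |W_{i,k} x_k|
    ratios = contrib / (contrib.sum(axis=1, keepdims=True) + eps)
    return ratios.T @ u_z                                   # shape (|x|,)
```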
6 Experiments

In this section we describe the datasets used in our experiments and various details concerning our models. We present our experimental results and analyze model behavior. Our code is publicly available at https://github.com/donglixp/confidence.

6.1 Datasets

We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations. Examples are shown in Table 1.

Table 1: Dataset examples (identifier underscores reconstructed from the extracted text).
IFTTT: turn android phone to full volume at 7am monday to friday
  → date_time−every_day_of_the_week_at−((time_of_day (07)(:)(00)) (days_of_the_week (1)(2)(3)(4)(5))) THEN android_device−set_ringtone_volume−(volume ({'volume_level':1.0,'name':'100%'}))
DJANGO: for every key in sorted list of user settings
  → for key in sorted(user_settings):

IFTTT. This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website. The programs are written for various applications, such as home security (e.g., "email me if the window opens") and task automation (e.g., "save instagram photos to dropbox"). Whenever a program's trigger is satisfied, an action is performed. Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android). There are 552 trigger functions and 229 action functions. The original split contains 77,495 training, 5,171 development, and 4,294 test instances. We used the subset from which non-English descriptions have been removed.

DJANGO. This dataset (Oda et al., 2015) is built upon the code of the Django web framework. Each line of Python code has a manually annotated natural language description. Our goal is to map the English pseudo-code to Python statements. This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation. The original split has 16,000 training, 1,000 development, and 1,805 test examples.

6.2 Settings

We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017). Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased. We filtered out words that appeared less than four times in the training set. Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with placeholders. Hyperparameters of the semantic parsers were validated on the development set. The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively. The dropout rate was 0.25. A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO. Dimensions for the word embeddings and hidden vectors were selected from {150, 250}. The beam size during decoding was 5.

For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as the evaluation metric (Quirk et al., 2015). We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard. The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016). For DJANGO, we measure the fraction of exact matches, where the F1 score is equal to accuracy. Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown tokens in the prediction with the input words they align to (Luong et al., 2015b). The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017).

To estimate model uncertainty, we set the dropout rate to 0.1, and performed 30 inference passes. The standard deviation of the Gaussian noise was 0.05. The language model was estimated using KenLM (Heafield et al., 2013). For input uncertainty, we computed the variance over the 10-best candidates. The confidence metrics were implemented in batch mode, to take full advantage of GPUs. Hyperparameters of the confidence scoring model were cross-validated. The number of boosted trees was selected from {20, 50}. The maximum tree depth was selected from {3, 4, 5}. We set the subsample ratio to 0.8. All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left at their default values.
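A minimal sketch of fitting the confidence scorer with XGBoost under the hyperparameter ranges reported above; the feature matrix and targets below are synthetic stand-ins for the Section 4 metrics and the held-out F1 scores, so this illustrates the setup rather than reproducing the paper's pipeline.

```python
import numpy as np
import xgboost as xgb

# Stand-ins: rows stack confidence metrics for held-out examples,
# targets are per-example F1 scores in [0, 1].
rng = np.random.default_rng(0)
X_dev = rng.normal(size=(500, 12))   # 12 hypothetical metric features
y_dev = rng.uniform(size=500)        # F1 regression targets

scorer = xgb.XGBRegressor(
    objective="reg:logistic",        # logistic wrapping keeps scores in (0, 1)
    n_estimators=50,                 # selected from {20, 50}
    max_depth=4,                     # selected from {3, 4, 5}
    subsample=0.8,                   # as reported above
)
scorer.fit(X_dev, y_dev)
confidence = scorer.predict(X_dev[:5])   # s(q, a) for five examples
```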
6.3 Results

Confidence Estimation. We compare our approach (CONF) against confidence scores based on the posterior probability p(a|q) (POSTERIOR). We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) obtained by removing each group of confidence metrics described in Section 4. We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient, which varies between −1 and 1 (0 implies there is no correlation). A high ρ indicates that the confidence scores are high for correct predictions and low otherwise.

[Table 2: Spearman ρ correlation between confidence scores and F1. Best results are shown in bold. All correlations are significant at p < 0.01.]

As shown in Table 2, our method CONF outperforms POSTERIOR by a large margin. The ablation results indicate that model uncertainty plays the most important role among the confidence metrics. In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain. Improvements for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994).

Tables 3 and 4 show the correlation matrices for F1 and the individual confidence metrics on the IFTTT and DJANGO datasets, respectively. As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty. Perhaps unsurprisingly, metrics of the same group are highly inter-correlated, since they model the same type of uncertainty. Table 5 shows the relative importance of individual metrics in the regression model. As the importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as a feature to a branch of the decision tree (Chen and Guestrin, 2016).

[Table 5: Importance scores of confidence metrics (normalized by the maximum value on each dataset). Best results are shown in bold. The same shorthands apply as in Table 3.]

The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays the most important role. On IFTTT, the number of unknown tokens (#UNK) and the variance of the top candidates (var(K-best)) are also very helpful, because this dataset is relatively noisy and contains many ambiguous inputs.
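The evaluation behind Table 2 amounts to a Spearman correlation between confidence scores and per-example F1; here is a self-contained sketch with stand-in data in place of real model outputs.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
f1_scores = rng.uniform(size=200)                          # per-example F1 (stand-in)
confidence_scores = f1_scores + rng.normal(0, 0.2, 200)    # noisy scores (stand-in)

# rho close to 1 means high confidence tracks correct predictions
rho, p_value = spearmanr(confidence_scores, f1_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.2g})")
```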
Finally, in real-world applications, confidence scores are often used with a threshold to trade off precision for coverage. Figure 3 shows how the F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for. The F1 score improves monotonically for both POSTERIOR and our method; however, our method achieves better performance at the same coverage.

Uncertainty Interpretation. We next evaluate how well our backpropagation method (see Section 5) identifies input tokens contributing to uncertainty. We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION). As shown in Equation (2), the attention scores $r_{t,k}$ can be used as soft alignments between time step $t$ of the decoder and the $k$-th input token. We compute the normalized uncertainty score $\hat{u}_{q_k}$ for a token $q_k$ via:

$$\hat{u}_{q_k} \propto \sum_{t=1}^{|a|} r_{t,k}\, u_{a_t} \quad (8)$$

where $u_{a_t}$ is the uncertainty score of the predicted token $a_t$ (Equation (7)), and $\sum_{k=1}^{|q|} \hat{u}_{q_k} = 1$.

Unfortunately, the evaluation of uncertainty interpretation methods is problematic. For our semantic parsing task, we do not know a priori which tokens in the natural language input contribute to uncertainty, and these may vary depending on the architecture used, model parameters, and so on. We work around this problem by creating a proxy gold standard. We inject noise into the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token $q_t$ (Equation (6)); the addition of noise should only affect genuinely uncertain tokens. Notice that here we inject noise into one token at a time, instead of into all parameters (see Figure 1). Tokens identified as uncertain by the above procedure are considered gold standard and compared to those identified by our method. We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).

We define an evaluation metric based on the overlap (overlap@K) between the tokens identified as uncertain by the model and by the gold standard. Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list $\tau_1$ of the $K$ tokens with the highest scores. We also obtain a list $\tau_2$ of the $K$ tokens with the highest ground-truth scores, and measure the degree of overlap between these two lists:

$$\mathrm{overlap@}K = \frac{|\tau_1 \cap \tau_2|}{K}$$

where $K \in \{2, 4\}$ in our experiments. For example, the overlap@4 metric of the lists $\tau_1 = [q_7, q_8, q_2, q_3]$ and $\tau_2 = [q_7, q_8, q_3, q_4]$ is 3/4, because there are three overlapping tokens.
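A sketch of the overlap@K computation, reproducing the worked example from the text; the score values are illustrative, only their ranking matters.

```python
def overlap_at_k(scores_model, scores_gold, k):
    """overlap@K: fraction of shared tokens among the K highest-scoring
    input positions under the model and the proxy gold standard
    (both given as position -> score dictionaries)."""
    top = lambda scores: set(sorted(scores, key=scores.get, reverse=True)[:k])
    return len(top(scores_model) & top(scores_gold)) / k

# tau1 = [q7, q8, q2, q3] and tau2 = [q7, q8, q3, q4] share three tokens:
tau1 = {"q7": 0.9, "q8": 0.8, "q2": 0.7, "q3": 0.6, "q4": 0.1}
tau2 = {"q7": 0.9, "q8": 0.8, "q3": 0.7, "q4": 0.6, "q2": 0.1}
print(overlap_at_k(tau1, tau2, 4))  # 0.75
```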
Table 6 reports results with overlap@2 and overlap@4. Overall, BACKPROP achieves better interpretation quality than the attention mechanism. On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.

Table 6: Uncertainty interpretation against the inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by each method and those in the gold standard. Overlap is shown for the top 2 and top 4 tokens. Best results are in bold.

Method    | IFTTT @2 | IFTTT @4 | DJANGO @2 | DJANGO @4
ATTENTION | 0.525    | 0.737    | 0.637     | 0.684
BACKPROP  | 0.608    | 0.791    | 0.770     | 0.788

Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output. We highlight a predicted token $a_t$ if its uncertainty score $u_{a_t}$ is greater than $0.5 \cdot \mathrm{avg}\{u_{a_{t'}}\}_{t'=1}^{|a|}$. The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs and message content) and about ambiguous inputs. The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments, while BACKPROP can utilize information flowing through the LSTMs rather than relying only on the attention mechanism.

Table 7: Uncertainty interpretation for ATTENTION (ATT) and BACKPROP (BP). The first line in each group is the model prediction. In the original paper, predicted tokens and input words with large scores are shown in red and blue, respectively (the highlighting is not recoverable from the extracted text; identifier underscores are reconstructed).

Prediction: google_calendar−any_event_starts THEN facebook−create_a_status_message−(status_message ({description}))
ATT: post calendar event to facebook
BP:  post calendar event to facebook

Prediction: feed−new_feed_item−(feed_url (url sports.espn.go.com)) THEN ...
ATT: espn mlb headline to readability
BP:  espn mlb headline to readability

Prediction: weather−tomorrow's_low_drops_below−((temperature (0)) (degrees_in (c))) THEN ...
ATT: warn me when it's going to be freezing tomorrow
BP:  warn me when it's going to be freezing tomorrow

Prediction: if str_number[0] == 'STR':
ATT: if first element of str_number equals a string STR .
BP:  if first element of str_number equals a string STR .

Prediction: start = 0
ATT: start is an integer 0 .
BP:  start is an integer 0 .

Prediction: if name.startswith('STR'):
ATT: if name starts with an string STR ,
BP:  if name starts with an string STR ,

7 Conclusions

In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing. Experimental results show that our method achieves better performance than competitive baselines on two datasets. Directions for future work are many and varied. The proposed framework could be applied to a variety of tasks employing sequence-to-sequence architectures (Bahdanau et al., 2015; Schmaltz et al., 2017). We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing.
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
slide_id: GEM-SciDuet-train-114#paper-1307#slide-5
slide_title: Confidence Metrics
slide_content_text:
The model is unconfident about its prediction when:
- it is unsure about the model parameters or structure;
- it can estimate reliably, but the entropy is large;
- the input itself is unspecific/ambiguous, which would lead to several different correct outputs.
target: identical to slide_content_text above.
references: []
gem_id: GEM-SciDuet-train-114#paper-1307#slide-6
paper_id: 1307
paper_title: Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
modeling framework for sequenceto-sequence models.", "We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al.", "2015; and DJANGO, Oda et al.", "2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) , and question answering (Gondek et al., 2012) .", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017) .", "But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process.", "We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015) .", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016) , data augmentation (Jia and Liang, 2016; , the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; , coarse-tofine decoding (Dong and Lapata, 2018) , network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017) , user feedback (Iyer et al., 2017) , and transfer learning (Fan et al., 2017) .", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected inadvertently 
affecting user experience.", "Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1 .", "An encoder is used to encode natural language input q = q 1 · · · q |q| into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a 1 · · · a |a| conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence p (a|q) is factorized as: p (a|q) = |a| t=1 p (a t |a <t , q) (1) where a <t = a 1 · · · a t−1 .", "Let e t ∈ R n denote the hidden vector of the encoder at time step t. It is computed via e t = f LSTM (e t−1 , q t ), where f LSTM refers to the LSTM unit, and q t ∈ R n is the word embedding … … … <s> … … … i) iii) i) ii) iv) Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty.", "The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.", "of q t .", "Once the tokens of the input sequence are encoded into vectors, e |q| is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step t is computed by d t = f LSTM (d t−1 , a t−1 ), where a t−1 ∈ R n is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as: r t,k ∝ exp{d t · e k } (2) where |q| j=1 r t,j = 1.", "The probability of generating a t is computed via: c t = |q| k=1 r t,k e k (3) d att t = tanh (W 1 d t + W 2 c t ) (4) p (a t |a <t , q) = softmax at W o d att t (5) where W 1 , W 2 ∈ R n×n and W o ∈ R |Va|×n are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize (q,a)∈D log p (a|q), where D represents training pairs.", "At test time, the model's prediction for input q is obtained viâ a = arg max a p (a |q), where a represents candidate outputs.", "Because p (a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ← 1, · · · , F do 2:M i ← Apply dropout layers to M Figure 1 3: Run forward pass and computep(a|q;M i ) 4: Compute variance of {p(a|q;M i )} F i=1 Equation (6) score s (q, a) ∈ (0, 1).", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to 
predict s (q, a).", "Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q).", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016) .", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {p(a|q; M i )} F i=1 whereM i represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: var{p(a|q;M i )} F i=1 .", "(6) In addition, we compute uncertainty u at at the token-level a t via: u at = var{p(a t |a <t , q;M i )} F i=1 (7) wherep(a t |a <t , q;M i ) is the probability of generating token a t (Equation (5) ) using perturbed modelM i .", "We operationalize tokenlevel uncertainty in two ways, as the average score avg{u at } |a| t=1 and the maximum score max{u at } |a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1 , we add dropout layers in i) the word vectors of the encoder and decoder q t , a t ; ii) the output vectors of the encoder e t ; iii) bridge vectors e |q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors d att t (Equation (4) ).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N (0, σ 2 ).", "We usev = v + g andv = v + v g as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability log p(a|q) as a sequence-level metric.", "The token-level metric min{p(a t |a <t , q)} |a| t=1 can identify the most uncertain predicted token.", "The perplexity per token − 1 |a| |a| t=1 log p (a t |a <t , q) is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input q as a metric.", "Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, 
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequencelevel metric is computed by: var{p(a i |q)} K i=1 where a 1 .", ".", ".", "a K are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = − a p(a |q) log p(a |q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by avg{H[a t |a <t , q]} |a| t=1 and max{H[a t |a <t , q]} |a| t=1 .", "Confidence Scoring The sentence-and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a).", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: (q,a)∈D ln(1+e −ŝ(q,a) ) yq,a + ln(1+eŝ (q,a) ) (1−yq,a) where D represents the data, y q,a is the target F1 score, andŝ(q, a) the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying = v c 1 m u c 1 + v c 2 m u c 2 .", "The score u m is then redistributed to its parent neurons p 1 and p 2 , which satisfies v m p 1 + v m p 2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7) ) from predictions to input tokens, following the ideas of Bach et al.", "(2015) and Zhang et al.", "(2016) .", "Let u m denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2 , u m is computed by the summation of the scores backpropagated from its child neurons: u m = c∈Child(m) v c m u c where Child(m) is the set of m's child neurons, and the non-negative contribution ratio v c m indicates how much we backpropagate u c to neuron m. 
Intuitively, if neuron m contributes more to c's value, ratio v c m should be larger.", "After obtaining score u m , we redistribute it to its parent neurons in the same way.", "Contribution ratios from m to its parent neurons are normalized to 1: p∈Parent(m) v m p = 1 where Parent(m) is the set of m's parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let x denote the input.", "The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈ R |z| * |x| is the weight matrix, b ∈ R |z| is the bias, and neuron z i is computed via z i = σ( |x| j=1 W i,j x j + b i ).", "Neuron x k 's uncertainty score u x k is gath-Algorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ûq t } |q| t=1 : Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: Get token-level uncertainty for predicted tokens 2: {ua t } |a| t=1 ← TokenUnc(q, a) 3: Initialize uncertainty scores for backpropagation 4: for t ← 1, · · · , |a| do 5: Decoder classifier's output neuron ← ua t 6: Run backpropagation 7: for m ← neuron in backward topological order do 8: Gather scores from child neurons 9: um ← c∈Child(m) v c m uc 10: Summarize scores for input words 11: for t ← 1, · · · , |q| do 12: uq t ← c∈q t uc 13: {ûq t } |q| t=1 ← normalize {uq t } |q| t=1 ered from the next layer: u x k = |z| i=1 v z i x k u z i = |z| i=1 |W i,k x k | |x| j=1 |W i,j x j | u z i ignoring the nonlinear function σ and the bias b.", "The ratio v z i x k is proportional to the contribution of x k to the value of z i .", "We define backpropagation rules for elementwise vector operators.", "For z = x ± y, these are: u x k = |x k | |x k |+|y k | u z k u y k = |y k | |x k |+|y k | u z k where the contribution ratios v z k x k and v z k y k are determined by |x k | and |y k |.", "For multiplication, the contribution of two elements in 1 3 * 3 should be the same.", "So, the propagation rules for z = x y are: u x k = | log |x k || | log |x k ||+| log |y k || u z k u y k = | log |y k || | log |x k ||+| log |y k || u z k where the contribution ratios are determined by | log |x k || and | log |y k ||.", "For scalar multiplication, z = λx where λ denotes a constant.", "We directly assign z's uncertainty scores to x and the backpropagation rule is u x k = u z k .", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token a t , we compute its uncertainty score u at as in Equation (7) .", "Next, we find the dimension of a t in the decoder's softmax classifier (Equation (5) ), and initialize the neuron with the uncertainty score u at .", "We then backpropagate these uncertainty scores through Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({' volume level':1.0,'name':'100%'})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): the network (lines 6-9), and finally into the neurons of the input words.", "We summarize them and compute the token-level scores for interpreting the results (line 10-13).", "For input word vector q t , we use the summation of its neuron-level scores as the token-level score:û qt ∝ c∈qt u c where c ∈ q t represents the neurons of word 
vector q t , and |q| t=1û qt = 1.", "We use the normalized scoreû qt to indicate token q t 's contribution to prediction uncertainty.", "Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/ donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1 .", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017) .", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015) .", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016) .", "For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to- Table 2 : Spearman ρ correlation between confidence scores and F1.", "Best results are shown in bold.", "All correlations are significant at p < 0.01. 
"The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017).", "To estimate model uncertainty, we set the dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of the Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013).", "For input uncertainty, we computed the variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left at their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on the posterior probability p(a|q) (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) obtained by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient, which varies between −1 and 1 (0 implies there is no correlation).", "A high ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2, our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improvements for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994).", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly, metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As the importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as a feature to the branch of the decision tree (Chen and Guestrin, 2016).", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays the most important role.", "Table 5: Importance scores of confidence metrics (normalized by the maximum value on each dataset). Best results are shown in bold. Same shorthands apply as in Table 3.",
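A sketch of the confidence scoring and evaluation pipeline under the settings above; the feature matrices and F1 targets below are random stand-ins for the real confidence metrics, and the paper's exact boosting loss may differ from the reg:logistic objective assumed here.

```python
import numpy as np
import xgboost as xgb
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
X_dev, y_dev = rng.normal(size=(500, 16)), rng.uniform(size=500)    # metric features, F1 targets
X_test, f1_test = rng.normal(size=(200, 16)), rng.uniform(size=200)

scorer = xgb.XGBRegressor(
    objective="reg:logistic",  # keeps predicted confidence scores in (0, 1)
    n_estimators=50,           # selected from {20, 50}
    max_depth=4,               # selected from {3, 4, 5}
    subsample=0.8,
)
scorer.fit(X_dev, y_dev)       # fit on held-out data, not the parser's training set

conf = scorer.predict(X_test)
rho, _ = spearmanr(conf, f1_test)  # agreement between confidence scores and F1
print(f"Spearman rho: {rho:.3f}")
```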
"On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade off precision for coverage.", "Figure 3 shows how the F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "The F1 score improves monotonically for both POSTERIOR and our method, which, however, achieves better performance when coverage is the same.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2), attention scores $r_{t,k}$ can be used as soft alignments between time step $t$ of the decoder and the $k$-th input token.", "We compute the normalized uncertainty score $\hat{u}_{q_k}$ for a token $q_k$ via: $\hat{u}_{q_k} \propto \sum_{t=1}^{|a|} r_{t,k} u_{a_t}$ (8), where $u_{a_t}$ is the uncertainty score of the predicted token $a_t$ (Equation (7)), and $\sum_{k=1}^{|q|} \hat{u}_{q_k} = 1$.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not a priori know which tokens in the natural language input contribute to uncertainty, and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise into the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token $q_t$ (Equation (6)); the addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise into one token at a time instead of all parameters (see Figure 1).", "Tokens identified as uncertain by the above procedure are considered the gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list $\tau_1$ of the K tokens with the highest scores.", "We also obtain a list $\tau_2$ of the K tokens with the highest ground-truth scores and measure the degree of overlap between these two lists: $\mathrm{overlap@}K = \frac{|\tau_1 \cap \tau_2|}{K}$, where $K \in \{2, 4\}$ in our experiments.", "Table 6: Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard. Overlap is shown for the top 2 and 4 tokens. Best results are in bold. ATTENTION: IFTTT @2 0.525, @4 0.737; DJANGO @2 0.637, @4 0.684. BACKPROP: IFTTT @2 0.608, @4 0.791; DJANGO @2 0.770, @4 0.788.",
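To pin down the two interpretation methods being compared, here is a minimal sketch of the attention-based scoring of Equation (8) and the overlap@K evaluation; the array shapes and names are assumptions for illustration.

```python
import numpy as np

def attention_interpretation(r, u_a):
    # Equation (8): r has shape (|a|, |q|), with each decoder step's
    # attention weights summing to 1; u_a holds the token-level
    # uncertainty score of each predicted token.
    u_q = r.T @ u_a                      # u_q[k] = sum_t r[t, k] * u_a[t]
    return u_q / (u_q.sum() + 1e-12)     # normalize so the scores sum to 1

def overlap_at_k(scores_pred, scores_gold, k):
    # overlap@K = |tau1 & tau2| / K over the top-K token indices under
    # the predicted and proxy gold interpretation scores.
    tau1 = set(np.argsort(scores_pred)[-k:])
    tau2 = set(np.argsort(scores_gold)[-k:])
    return len(tau1 & tau2) / k
```

On the paper's worked example, two top-4 lists sharing three tokens would give overlap_at_k a value of 3/4.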
"Table 7: Uncertainty interpretation for ATTENTION (ATT) and BACKPROP (BP). The first line in each group is the model prediction; the ATT and BP lines repeat the input, and in the paper predicted tokens and input words with large scores are shown in red and blue, respectively. Group 1: google calendar−any event starts THEN facebook−create a status message−(status message ({description})) / ATT: post calendar event to facebook / BP: post calendar event to facebook. Group 2: feed−new feed item−(feed url( url sports.espn.go.com)) THEN ... / ATT: espn mlb headline to readability / BP: espn mlb headline to readability. Group 3: weather−tomorrow's low drops below−((temperature(0)) (degrees in(c))) THEN ... / ATT: warn me when it's going to be freezing tomorrow / BP: warn me when it's going to be freezing tomorrow. Group 4: if str number[0] == ' STR ': / ATT: if first element of str number equals a string STR . / BP: if first element of str number equals a string STR . Group 5: start = 0 / ATT: start is an integer 0 . / BP: start is an integer 0 . Group 6: if name.startswith(' STR '): / ATT: if name starts with an string STR , / BP: if name starts with an string STR ,", "For example, the overlap@4 metric of the lists $\tau_1 = [q_7, q_8, q_2, q_3]$ and $\tau_2 = [q_7, q_8, q_3, q_4]$ is 3/4, because there are three overlapping tokens.", "Table 6 reports results with overlap@2 and overlap@4.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.", "Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight token $a_t$ if its uncertainty score $u_{a_t}$ is greater than $0.5 * \mathrm{avg}\{u_{a_{t'}}\}_{t'=1}^{|a|}$.", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs and message content), and about ambiguous inputs.", "The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments, while BACKPROP can utilize information flowing through the LSTMs rather than only relying on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures.", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-6
Model Uncertainty
Token-level: avg log min{ Dropout as a Bayesian approximation (Yarin Gal, Zoubin Ghahramani) 1. Inject noise to the model multiple times [encoder-decoder LSTM diagram]
Token-level: avg log min{ Dropout as a Bayesian approximation (Yarin Gal, Zoubin Ghahramani) 1. Inject noise to the model multiple times [encoder-decoder LSTM diagram]
[]
GEM-SciDuet-train-114#paper-1307#slide-7
1307
GEM-SciDuet-train-114#paper-1307#slide-7
Data Uncertainty
p(q|D): probability of input KenLM (Heafield et al., 2013) estimated on the training set Number of unknown words of input
p(q|D): probability of input KenLM (Heafield et al., 2013) estimated on the training set Number of unknown words of input
[]
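The data-uncertainty metrics on this slide can be computed with a few lines; below is a sketch assuming the KenLM Python bindings and a hypothetical train.arpa language model estimated on the training questions.

```python
import kenlm  # Python bindings for KenLM (Heafield et al., 2013)

lm = kenlm.Model("train.arpa")  # hypothetical LM file built on the training set

def data_uncertainty(question, train_vocab):
    tokens = question.lower().split()
    log_p_input = lm.score(" ".join(tokens), bos=True, eos=True)  # log10 p(q|D)
    num_unk = sum(1 for t in tokens if t not in train_vocab)      # #UNK metric
    return log_p_input, num_unk
```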
GEM-SciDuet-train-114#paper-1307#slide-8
1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
modeling framework for sequenceto-sequence models.", "We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al.", "2015; and DJANGO, Oda et al.", "2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) , and question answering (Gondek et al., 2012) .", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017) .", "But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process.", "We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015) .", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016) , data augmentation (Jia and Liang, 2016; , the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; , coarse-tofine decoding (Dong and Lapata, 2018) , network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017) , user feedback (Iyer et al., 2017) , and transfer learning (Fan et al., 2017) .", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected inadvertently 
affecting user experience.", "Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1 .", "An encoder is used to encode natural language input q = q 1 · · · q |q| into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a 1 · · · a |a| conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence p (a|q) is factorized as: p (a|q) = |a| t=1 p (a t |a <t , q) (1) where a <t = a 1 · · · a t−1 .", "Let e t ∈ R n denote the hidden vector of the encoder at time step t. It is computed via e t = f LSTM (e t−1 , q t ), where f LSTM refers to the LSTM unit, and q t ∈ R n is the word embedding … … … <s> … … … i) iii) i) ii) iv) Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty.", "The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.", "of q t .", "Once the tokens of the input sequence are encoded into vectors, e |q| is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step t is computed by d t = f LSTM (d t−1 , a t−1 ), where a t−1 ∈ R n is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as: r t,k ∝ exp{d t · e k } (2) where |q| j=1 r t,j = 1.", "The probability of generating a t is computed via: c t = |q| k=1 r t,k e k (3) d att t = tanh (W 1 d t + W 2 c t ) (4) p (a t |a <t , q) = softmax at W o d att t (5) where W 1 , W 2 ∈ R n×n and W o ∈ R |Va|×n are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize (q,a)∈D log p (a|q), where D represents training pairs.", "At test time, the model's prediction for input q is obtained viâ a = arg max a p (a |q), where a represents candidate outputs.", "Because p (a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ← 1, · · · , F do 2:M i ← Apply dropout layers to M Figure 1 3: Run forward pass and computep(a|q;M i ) 4: Compute variance of {p(a|q;M i )} F i=1 Equation (6) score s (q, a) ∈ (0, 1).", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to 
predict s (q, a).", "Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q).", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016) .", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {p(a|q; M i )} F i=1 whereM i represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: var{p(a|q;M i )} F i=1 .", "(6) In addition, we compute uncertainty u at at the token-level a t via: u at = var{p(a t |a <t , q;M i )} F i=1 (7) wherep(a t |a <t , q;M i ) is the probability of generating token a t (Equation (5) ) using perturbed modelM i .", "We operationalize tokenlevel uncertainty in two ways, as the average score avg{u at } |a| t=1 and the maximum score max{u at } |a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1 , we add dropout layers in i) the word vectors of the encoder and decoder q t , a t ; ii) the output vectors of the encoder e t ; iii) bridge vectors e |q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors d att t (Equation (4) ).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N (0, σ 2 ).", "We usev = v + g andv = v + v g as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability log p(a|q) as a sequence-level metric.", "The token-level metric min{p(a t |a <t , q)} |a| t=1 can identify the most uncertain predicted token.", "The perplexity per token − 1 |a| |a| t=1 log p (a t |a <t , q) is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input q as a metric.", "Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, 
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequence-level metric is computed by: $\mathrm{var}\{p(a_i|q)\}_{i=1}^{K}$ where $a_1 \ldots a_K$ are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: $H[a|q] = -\sum_{a'} p(a'|q) \log p(a'|q)$, which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by $\mathrm{avg}\{H[a_t|a_{<t},q]\}_{t=1}^{|a|}$ and $\max\{H[a_t|a_{<t},q]\}_{t=1}^{|a|}$.", "Confidence Scoring The sentence- and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score $s(q,a)$.", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: $\sum_{(q,a)\in\mathcal{D}} \ln(1+e^{-\hat{s}(q,a)})\, y_{q,a} + \ln(1+e^{\hat{s}(q,a)})\,(1-y_{q,a})$ where $\mathcal{D}$ represents the data, $y_{q,a}$ is the target F1 score, and $\hat{s}(q,a)$ the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful insofar as they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "(Figure 2 caption: $u_m = v^{c_1}_m u_{c_1} + v^{c_2}_m u_{c_2}$; the score $u_m$ is then redistributed to its parent neurons $p_1$ and $p_2$, which satisfies $v^m_{p_1} + v^m_{p_2} = 1$.)", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7)) from predictions to input tokens, following the ideas of Bach et al. (2015) and Zhang et al. (2016).", "Let $u_m$ denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2, $u_m$ is computed by the summation of the scores backpropagated from its child neurons: $u_m = \sum_{c\in\mathrm{Child}(m)} v^c_m u_c$ where $\mathrm{Child}(m)$ is the set of m's child neurons, and the non-negative contribution ratio $v^c_m$ indicates how much we backpropagate $u_c$ to neuron m. 
Intuitively, if neuron m contributes more to c's value, ratio v c m should be larger.", "After obtaining score u m , we redistribute it to its parent neurons in the same way.", "Contribution ratios from m to its parent neurons are normalized to 1: p∈Parent(m) v m p = 1 where Parent(m) is the set of m's parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let x denote the input.", "The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈ R |z| * |x| is the weight matrix, b ∈ R |z| is the bias, and neuron z i is computed via z i = σ( |x| j=1 W i,j x j + b i ).", "Neuron x k 's uncertainty score u x k is gath-Algorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ûq t } |q| t=1 : Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: Get token-level uncertainty for predicted tokens 2: {ua t } |a| t=1 ← TokenUnc(q, a) 3: Initialize uncertainty scores for backpropagation 4: for t ← 1, · · · , |a| do 5: Decoder classifier's output neuron ← ua t 6: Run backpropagation 7: for m ← neuron in backward topological order do 8: Gather scores from child neurons 9: um ← c∈Child(m) v c m uc 10: Summarize scores for input words 11: for t ← 1, · · · , |q| do 12: uq t ← c∈q t uc 13: {ûq t } |q| t=1 ← normalize {uq t } |q| t=1 ered from the next layer: u x k = |z| i=1 v z i x k u z i = |z| i=1 |W i,k x k | |x| j=1 |W i,j x j | u z i ignoring the nonlinear function σ and the bias b.", "The ratio v z i x k is proportional to the contribution of x k to the value of z i .", "We define backpropagation rules for elementwise vector operators.", "For z = x ± y, these are: u x k = |x k | |x k |+|y k | u z k u y k = |y k | |x k |+|y k | u z k where the contribution ratios v z k x k and v z k y k are determined by |x k | and |y k |.", "For multiplication, the contribution of two elements in 1 3 * 3 should be the same.", "So, the propagation rules for z = x y are: u x k = | log |x k || | log |x k ||+| log |y k || u z k u y k = | log |y k || | log |x k ||+| log |y k || u z k where the contribution ratios are determined by | log |x k || and | log |y k ||.", "For scalar multiplication, z = λx where λ denotes a constant.", "We directly assign z's uncertainty scores to x and the backpropagation rule is u x k = u z k .", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token a t , we compute its uncertainty score u at as in Equation (7) .", "Next, we find the dimension of a t in the decoder's softmax classifier (Equation (5) ), and initialize the neuron with the uncertainty score u at .", "We then backpropagate these uncertainty scores through Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({' volume level':1.0,'name':'100%'})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): the network (lines 6-9), and finally into the neurons of the input words.", "We summarize them and compute the token-level scores for interpreting the results (line 10-13).", "For input word vector q t , we use the summation of its neuron-level scores as the token-level score:û qt ∝ c∈qt u c where c ∈ q t represents the neurons of word 
vector q t , and |q| t=1û qt = 1.", "We use the normalized scoreû qt to indicate token q t 's contribution to prediction uncertainty.", "Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/ donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1 .", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017) .", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015) .", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016) .", "For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to- Table 2 : Spearman ρ correlation between confidence scores and F1.", "Best results are shown in bold.", "All correlations are significant at p < 0.01. 
kens in the prediction with the input words they align to (Luong et al., 2015b) .", "The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017) .", "To estimate model uncertainty, we set dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013) .", "For input uncertainty, we computed variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on posterior probability p(a|q) (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation).", "High ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2 , our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improve- Table 3 .", "ments for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994) .", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016) .", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays Table 5 : Importance scores of confidence metrics (normalized by maximum value on each dataset).", "Best results are shown in bold.", "Same shorthands apply as in Table 3. 
the most important role.", "On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade-off precision for coverage.", "Figure 3 shows how F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "F1 score improves monotonically for POSTERIOR and our method, which, however, achieves better performance when coverage is the same.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2) , attention scores r t,k can be used as soft alignments between the time step t of the decoder and the k-th input token.", "We compute the normalized uncertainty scoreû qt for a token q t via: u qt ∝ |a| t=1 r t,k u at (8) where u at is the uncertainty score of the predicted token a t (Equation (7) ), and |q| t=1û qt = 1.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not a priori know which tokens in the natural language input contribute to uncertainty and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise to the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token q t (Equation (6) addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise to one token at a time 1 instead of all parameters (see Figure 1 ).", "Tokens identified as uncertain by the above procedure are considered gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list τ 1 of K tokens with highest scores.", "We also obtain a list τ 2 of K tokens with highest ground-truth scores and measure the degree of overlap between these two lists: overlap@K = |τ 1 ∩ τ 2 | K Method IFTTT DJANGO @2 @4 @2 @4 ATTENTION 0.525 0.737 0.637 0.684 BACKPROP 0.608 0.791 0.770 0.788 Table 6 : Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard.", "Overlap is shown for top 2 and 4 tokens.", "Best results are in bold.", "google calendar−any event starts THEN facebook −create a status message−(status message ({description})) ATT post calendar event to facebook BP post calendar event to facebook feed−new feed item−(feed url( url sports.espn.go.com)) THEN ... ATT espn mlb headline to readability BP espn mlb headline to readability weather−tomorrow's low drops below−(( temperature(0)) (degrees in(c))) THEN ... 
ATT warn me when it's going to be freezing tomorrow BP warn me when it's going to be freezing tomorrow if str number[0] == ' STR ': ATT if first element of str number equals a string STR .", "BP if first element of str number equals a string STR .", "start = 0 ATT start is an integer 0 .", "BP start is an integer 0 .", "if name.startswith(' STR '): ATT if name starts with an string STR , BP if name starts with an string STR , Table 7 : Uncertainty interpretation for ATTEN-TION (ATT) and BACKPROP (BP) .", "The first line in each group is the model prediction.", "Predicted tokens and input words with large scores are shown in red and blue, respectively.", "where K ∈ {2, 4} in our experiments.", "For example, the overlap@4 metric of the lists τ 1 = [q 7 , q 8 , q 2 , q 3 ] and τ 2 = [q 7 , q 8 , q 3 , q 4 ] is 3/4, because there are three overlapping tokens.", "Table 6 reports results with overlap@2 and overlap@4.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.", "Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight token a t if its uncertainty score u at is greater than 0.5 * avg{u a t } |a| t =1 .", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs, and message content), and ambiguous inputs.", "The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments while BACKPROP can utilize information flowing through the LSTMs rather than only relying on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures.", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
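The dropout-perturbation procedure above (Algorithm 1 with Equations (6) and (7)) reduces to repeated stochastic forward passes followed by a variance computation. A minimal numpy sketch, not the paper's released implementation: token_probs_fn(q, a, seed) is a hypothetical stand-in for one dropout-perturbed pass of the seq2seq parser, returning the per-token probabilities p(a_t | a_<t, q; M_i).

    import numpy as np

    def dropout_uncertainty(token_probs_fn, q, a, F=30):
        # Algorithm 1: F perturbed forward passes, then variances.
        # per_pass[i, t] holds p(a_t | a_<t, q; M_i) for pass i.
        per_pass = np.stack([token_probs_fn(q, a, seed=i) for i in range(F)])
        seq_probs = per_pass.prod(axis=1)        # p(a|q; M_i), Equation (1)
        u_tok = per_pass.var(axis=0)             # u_{a_t}, Equation (7)
        return {
            "var_seq": float(seq_probs.var()),   # sequence level, Equation (6)
            "avg_tok_var": float(u_tok.mean()),  # average over tokens
            "max_tok_var": float(u_tok.max()),   # most uncertain token
        }

    if __name__ == "__main__":
        def stub_scorer(q, a, seed):             # placeholder for the parser
            g = np.random.default_rng(seed)
            return np.clip(0.8 + 0.05 * g.standard_normal(len(a)), 1e-6, 1.0)
        print(dropout_uncertainty(stub_scorer, "demo input", ["f", "(", ")"]))

The Gaussian-noise variant reuses the same wrapper: only the perturbation inside the scorer changes (v + g or v + v * g instead of Bernoulli masking), not the variance computation.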
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
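The redistribution rules of Section 5 (fully connected layers, elementwise addition and subtraction, elementwise multiplication) determine how token-level uncertainty flows back to input words. A numpy sketch under the paper's stated simplification that the nonlinearity sigma and the bias b are ignored when computing contribution ratios; the function names are illustrative, not from the released code.

    import numpy as np

    def backprop_linear(u_z, W, x, eps=1e-12):
        # Rule for z = sigma(W x + b): each x_k receives
        # u_x[k] = sum_i |W[i,k] x[k]| / sum_j |W[i,j] x[j]| * u_z[i],
        # i.e. a share proportional to its contribution to each z_i.
        contrib = np.abs(W * x[None, :])                  # |W[i,j] * x[j]|
        ratios = contrib / (contrib.sum(axis=1, keepdims=True) + eps)
        return ratios.T @ u_z                             # gather over all z_i

    def backprop_add(u_z, x, y, eps=1e-12):
        # Rule for z = x +/- y: split each u_z[k] by |x_k| vs |y_k|.
        denom = np.abs(x) + np.abs(y) + eps
        return u_z * np.abs(x) / denom, u_z * np.abs(y) / denom

    def backprop_mul(u_z, x, y, eps=1e-12):
        # Rule for elementwise z = x * y: split by |log|x_k|| vs |log|y_k||,
        # so the two factors of 1/3 * 3 receive equal shares.
        lx = np.abs(np.log(np.abs(x) + eps))
        ly = np.abs(np.log(np.abs(y) + eps))
        return u_z * lx / (lx + ly + eps), u_z * ly / (lx + ly + eps)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        W, x = rng.standard_normal((3, 4)), rng.standard_normal(4)
        u_z = np.array([0.5, 0.3, 0.2])
        u_x = backprop_linear(u_z, W, x)
        print(u_x, u_x.sum())  # uncertainty mass is conserved: sums to u_z.sum()

Scalar multiplication (z = lambda * x) needs no code of its own: the score passes through unchanged, u_x[k] = u_z[k].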
GEM-SciDuet-train-114#paper-1307#slide-8
Input Uncertainty
Variance of top candidates: $\mathrm{var}\{p(a_i|q)\}_{i=1}^{K}$ over the K-best beam-search predictions. Entropy of decoding: $H[a|q] = -\sum_{a'} p(a'|q) \log p(a'|q)$, approximated by Monte Carlo sampling.
Variance of top candidates: $\mathrm{var}\{p(a_i|q)\}_{i=1}^{K}$ over the K-best beam-search predictions. Entropy of decoding: $H[a|q] = -\sum_{a'} p(a'|q) \log p(a'|q)$, approximated by Monte Carlo sampling.
[]
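Both input-uncertainty metrics on this slide can be computed directly from decoder output. A sketch assuming the K-best sequence probabilities are already available from beam search, and a hypothetical sample_fn(q) that draws one decode together with its log-probability for the Monte Carlo entropy estimate.

    import numpy as np

    def var_top_candidates(kbest_probs):
        # var{p(a_i|q)}_{i=1..K}: low variance means several near-tied
        # parses, which signals an ambiguous input.
        return float(np.var(kbest_probs))

    def mc_decoding_entropy(sample_fn, q, n_samples=50):
        # H[a|q] = -sum_a' p(a'|q) log p(a'|q), estimated as the mean of
        # -log p(a'|q) over samples a' ~ p(.|q) instead of enumerating
        # all candidate predictions.
        return float(np.mean([-sample_fn(q)[1] for _ in range(n_samples)]))

    if __name__ == "__main__":
        print(var_top_candidates([0.30, 0.28, 0.27, 0.05, 0.03]))  # ambiguous
        print(var_top_candidates([0.90, 0.04, 0.03, 0.02, 0.01]))  # confident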
GEM-SciDuet-train-114#paper-1307#slide-9
1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
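The posterior-probability and data-uncertainty metric groups that the abstract refers to need nothing beyond the decoder's token probabilities and the training vocabulary. A sketch; the optional language-model feature assumes the standard kenlm Python binding (the paper estimates its language model with KenLM), and the helper names are illustrative.

    import math

    def posterior_metrics(token_probs):
        # Sequence- and token-level posterior metrics: log p(a|q),
        # min_t p(a_t|a_<t,q), and the paper's "perplexity per token"
        # -(1/|a|) * sum_t log p(a_t|a_<t,q).
        logs = [math.log(p) for p in token_probs]
        return {"log_prob": sum(logs),
                "min_tok_prob": min(token_probs),
                "perplexity": -sum(logs) / len(logs)}

    def data_metrics(question_tokens, train_vocab, lm=None):
        # Data uncertainty: number of unknown input tokens, plus the
        # language-model score of the input if a kenlm.Model is supplied.
        feats = {"num_unk": sum(t not in train_vocab for t in question_tokens)}
        if lm is not None:
            feats["lm_logprob"] = lm.score(" ".join(question_tokens))
        return feats

    if __name__ == "__main__":
        print(posterior_metrics([0.9, 0.7, 0.95]))
        print(data_metrics("warn me when freezing".split(), {"warn", "me", "when"}))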
GEM-SciDuet-train-114#paper-1307#slide-9
Confidence Scoring
Use logistic regression to fit F1 scores of outputs. Model uncertainty: dropout perturbation, Gaussian noise, posterior probability. Data uncertainty: probability of input, number of unknown tokens. Input uncertainty: variance of top candidates, entropy of decoding.
Use logistic regression to fit F1 scores of outputs. Model uncertainty: dropout perturbation, Gaussian noise, posterior probability. Data uncertainty: probability of input, number of unknown tokens. Input uncertainty: variance of top candidates, entropy of decoding.
[]
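This slide says logistic regression; the paper itself fits a gradient tree boosting model (XGBoost) wrapped in a logistic function, with per-example F1 as the regression target and the Section 4 metrics as features. A sketch assuming xgboost's scikit-learn API and its built-in reg:logistic objective, which keeps both targets and predictions in (0, 1); the feature matrix here is a random placeholder.

    import numpy as np
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    X_heldout = rng.random((200, 9))   # placeholder for the confidence metrics
    f1_targets = rng.random(200)       # per-example F1 scores in [0, 1]

    # reg:logistic fits a logistic-wrapped score, so predictions can be
    # read directly as confidence s(q, a) in (0, 1). Hyperparameters
    # mirror the paper's search space: trees in {20, 50}, depth in
    # {3, 4, 5}, subsample ratio 0.8.
    scorer = XGBRegressor(objective="reg:logistic", n_estimators=50,
                          max_depth=4, subsample=0.8)
    scorer.fit(X_heldout, f1_targets)
    print(scorer.predict(X_heldout[:3]))   # confidence scores in (0, 1)

Note that the scorer is trained on held-out data rather than on the parser's training set, so it does not inherit the parser's overfitting.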
GEM-SciDuet-train-114#paper-1307#slide-10
1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
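The abstract's comparison against attention-based interpretation corresponds to Equation (8) in the paper, and the overlap@K metric used to score interpretation quality is a two-liner. A numpy sketch with toy values; shapes follow the paper's notation (attention matrix r of size |a| x |q|, token-level uncertainties u_{a_t}).

    import numpy as np

    def attention_interpretation(att, u_tok):
        # Equation (8): project output-token uncertainty onto input
        # positions via attention weights, then normalize to sum to 1.
        scores = att.T @ u_tok          # att: (|a|, |q|), u_tok: (|a|,)
        return scores / scores.sum()

    def overlap_at_k(pred_scores, gold_scores, k):
        # overlap@K = |top-K(pred) intersect top-K(gold)| / K
        top = lambda s: set(np.argsort(np.asarray(s))[-k:])
        return len(top(pred_scores) & top(gold_scores)) / k

    if __name__ == "__main__":
        att = np.array([[0.7, 0.2, 0.1],     # 3 output steps x 3 input tokens
                        [0.1, 0.8, 0.1],
                        [0.2, 0.2, 0.6]])
        u_tok = np.array([0.1, 0.6, 0.3])    # uncertainty of predicted tokens
        pred = attention_interpretation(att, u_tok)
        print(pred, overlap_at_k(pred, [0.2, 0.5, 0.3], k=2))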
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
"We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty, and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can thus be applied to various architectures without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows us to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al. 2015; and DJANGO, Oda et al. 2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010), and question answering (Gondek et al., 2012).", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017).", "But the resulting models often contain more parameters, and the training process has to be changed accordingly, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of a Gaussian process.", "We adapt their framework so as to represent uncertainty in encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015).", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016) and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models, including the use of a tree decoder (Dong and Lapata, 2016), data augmentation (Jia and Liang, 2016), the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017), coarse-to-fine decoding (Dong and Lapata, 2018), network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017), user feedback (Iyer et al., 2017), and transfer learning (Fan et al., 2017).", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected, inadvertently affecting user experience.",
"Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely it is that the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016) we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1.", "An encoder is used to encode natural language input $q = q_1 \cdots q_{|q|}$ into a vector representation, and a decoder learns to generate a logical form representation of its meaning $a = a_1 \cdots a_{|a|}$ conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence $p(a|q)$ is factorized as: $p(a|q) = \prod_{t=1}^{|a|} p(a_t \mid a_{<t}, q)$ (1), where $a_{<t} = a_1 \cdots a_{t-1}$.", "Let $e_t \in \mathbb{R}^n$ denote the hidden vector of the encoder at time step $t$. It is computed via $e_t = f_{\mathrm{LSTM}}(e_{t-1}, q_t)$, where $f_{\mathrm{LSTM}}$ refers to the LSTM unit, and $q_t \in \mathbb{R}^n$ is the word embedding of $q_t$.", "[Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty. The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.]", "Once the tokens of the input sequence are encoded into vectors, $e_{|q|}$ is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step $t$ is computed by $d_t = f_{\mathrm{LSTM}}(d_{t-1}, a_{t-1})$, where $a_{t-1} \in \mathbb{R}^n$ is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step $t$ of the decoder, we compute its attention score with the $k$-th hidden state in the encoder as: $r_{t,k} \propto \exp\{d_t \cdot e_k\}$ (2), where $\sum_{j=1}^{|q|} r_{t,j} = 1$.", "The probability of generating $a_t$ is computed via: $c_t = \sum_{k=1}^{|q|} r_{t,k} e_k$ (3), $d_t^{\mathrm{att}} = \tanh(W_1 d_t + W_2 c_t)$ (4), and $p(a_t \mid a_{<t}, q) = \mathrm{softmax}_{a_t}(W_o d_t^{\mathrm{att}})$ (5), where $W_1, W_2 \in \mathbb{R}^{n \times n}$ and $W_o \in \mathbb{R}^{|V_a| \times n}$ are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation $a$ given input $q$, i.e., maximize $\sum_{(q,a) \in D} \log p(a|q)$, where $D$ represents the training pairs.", "At test time, the model's prediction for input $q$ is obtained via $\hat{a} = \arg\max_{a'} p(a'|q)$, where $a'$ represents candidate outputs.", "Because $p(a|q)$ is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input $q$ and its predicted meaning representation $a$, the confidence model estimates a score $s(q,a) \in (0,1)$.", "[Algorithm 1 Dropout Perturbation. Input: $q, a$: input and its prediction; $M$: model parameters. 1: for $i \leftarrow 1, \ldots, F$ do 2: $\hat{M}_i \leftarrow$ apply dropout layers to $M$ (Figure 1) 3: run a forward pass and compute $\hat{p}(a|q; \hat{M}_i)$ 4: compute the variance of $\{\hat{p}(a|q; \hat{M}_i)\}_{i=1}^{F}$ (Equation (6)).]", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to predict $s(q,a)$.",
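To make Algorithm 1 concrete, here is a minimal NumPy sketch of the dropout-perturbation score. It is not the paper's released implementation; `log_prob_fn` and the toy stand-in model are hypothetical placeholders for a seq2seq parser with dropout kept active at test time at the four locations of Figure 1.

```python
import numpy as np

def dropout_perturbation_score(log_prob_fn, q, a, F=30, seed=0):
    """Algorithm 1 (sketch): run F stochastic forward passes with dropout
    kept active at test time, and return the variance of p^(a|q) across
    the perturbed models (Equation (6))."""
    rng = np.random.default_rng(seed)
    probs = np.array([np.exp(log_prob_fn(q, a, rng)) for _ in range(F)])
    return probs.var()

# Hypothetical stand-in for a dropout-perturbed parser; a real log_prob_fn
# would score a under the seq2seq model with perturbed parameters M_i.
def toy_log_prob_fn(q, a, rng):
    return -0.5 * len(a) + rng.normal(0.0, 0.1)

q = ["post", "calendar", "event", "to", "facebook"]
a = ["facebook", ".", "create_status_message"]
print(dropout_perturbation_score(toy_log_prob_fn, q, a))
```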
"Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of $p(a|q)$.", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016).", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform $F$ forward passes through the network, and collect the results $\{\hat{p}(a|q; \hat{M}_i)\}_{i=1}^{F}$, where $\hat{M}_i$ represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: $\mathrm{var}\{\hat{p}(a|q; \hat{M}_i)\}_{i=1}^{F}$ (6).", "In addition, we compute uncertainty $u_{a_t}$ at the token level via: $u_{a_t} = \mathrm{var}\{\hat{p}(a_t \mid a_{<t}, q; \hat{M}_i)\}_{i=1}^{F}$ (7), where $\hat{p}(a_t \mid a_{<t}, q; \hat{M}_i)$ is the probability of generating token $a_t$ (Equation (5)) using perturbed model $\hat{M}_i$.", "We operationalize token-level uncertainty in two ways, as the average score $\mathrm{avg}\{u_{a_t}\}_{t=1}^{|a|}$ and the maximum score $\max\{u_{a_t}\}_{t=1}^{|a|}$ (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1, we add dropout layers in i) the word vectors of the encoder and decoder $q_t, a_t$; ii) the output vectors of the encoder $e_t$; iii) the bridge vectors $e_{|q|}$ used to initialize the hidden states of the first time step in the decoder; and iv) the decoding vectors $d_t^{\mathrm{att}}$ (Equation (4)).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let $\hat{v}$ denote a vector perturbed by noise, and $g$ a vector sampled from the Gaussian distribution $N(0, \sigma^2)$.", "We use $\hat{v} = v + g$ and $\hat{v} = v + v \odot g$ as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability $\log p(a|q)$ as a sequence-level metric.", "The token-level metric $\min\{p(a_t \mid a_{<t}, q)\}_{t=1}^{|a|}$ can identify the most uncertain predicted token.", "The perplexity per token $-\frac{1}{|a|} \sum_{t=1}^{|a|} \log p(a_t \mid a_{<t}, q)$ is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input $q$ does not match the training distribution or contains unknown words, it is difficult to predict $p(a|q)$ reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input $p(q|D)$, where $D$ represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input $q$ as a metric.",
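A small sketch of how the token-level variance of Equation (7), the two Gaussian noise injections, and the posterior-probability metrics could be computed. All function names and toy inputs here are illustrative assumptions, not the paper's code.

```python
import numpy as np

def token_level_uncertainty(token_prob_fn, q, a, F=30, seed=0):
    """Equation (7) (sketch): per-token variance across F perturbed passes,
    summarized by its average and maximum over the output sequence."""
    rng = np.random.default_rng(seed)
    runs = np.array([token_prob_fn(q, a, rng) for _ in range(F)])  # F x |a|
    u = runs.var(axis=0)
    return u.mean(), u.max()

def gaussian_perturb(v, rng, sigma=0.05, multiplicative=False):
    """The two noise-injection variants: v + g and v + v*g, g ~ N(0, sigma^2)."""
    g = rng.normal(0.0, sigma, size=v.shape)
    return v + v * g if multiplicative else v + g

def posterior_metrics(token_probs):
    """Sequence log-probability, minimum token probability, and the
    per-token perplexity metric as defined in the text."""
    logp = np.log(np.asarray(token_probs))
    return logp.sum(), np.min(token_probs), -logp.mean()

# Hypothetical parser stand-in returning per-token probabilities.
toy_token_prob_fn = lambda q, a, rng: np.clip(
    rng.normal(0.8, 0.05, size=len(a)), 1e-6, 1.0)
print(token_level_uncertainty(toy_token_prob_fn, ["x"], ["a", "b", "c"]))
print(posterior_metrics([0.9, 0.7, 0.95]))
```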
"Input Uncertainty Even if the model can estimate $p(a|q)$ reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequence-level metric is computed by: $\mathrm{var}\{p(a_i|q)\}_{i=1}^{K}$, where $a_1, \ldots, a_K$ are the $K$-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: $H[a|q] = -\sum_{a'} p(a'|q) \log p(a'|q)$, which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by $\mathrm{avg}\{H[a_t \mid a_{<t}, q]\}_{t=1}^{|a|}$ and $\max\{H[a_t \mid a_{<t}, q]\}_{t=1}^{|a|}$.", "Confidence Scoring The sentence- and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score $s(q,a)$.", "The model is wrapped with a logistic function so that confidence scores are in the range of $(0,1)$.", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: $\sum_{(q,a) \in D} \ln(1 + e^{-\hat{s}(q,a)})\, y_{q,a} + \ln(1 + e^{\hat{s}(q,a)})\, (1 - y_{q,a})$, where $D$ represents the data, $y_{q,a}$ is the target F1 score, and $\hat{s}(q,a)$ the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.",
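The confidence scorer could be approximated with the real XGBoost API as below. One caveat: `reg:logistic` is a stand-in that keeps scores in (0, 1) and fits F1 targets in [0, 1]; the paper's exact logistic-wrapped loss would require a custom objective. Feature and target arrays here are random placeholders.

```python
import numpy as np
import xgboost as xgb

# features[i] = the confidence metrics of Section 4 for held-out example i;
# targets[i] = that prediction's F1 score in [0, 1]. Placeholders below.
rng = np.random.default_rng(0)
features = rng.random((500, 12))
targets = rng.random(500)

scorer = xgb.XGBRegressor(
    objective="reg:logistic",  # keeps s(q, a) in (0, 1); an approximation
    n_estimators=50,           # of the paper's logistic-wrapped loss
    max_depth=4,
    subsample=0.8,
)
scorer.fit(features, targets)
print(scorer.predict(features[:3]))  # confidence scores for three examples
```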
"Uncertainty Interpretation Confidence scores are useful insofar as they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "[Figure 2: $u_m = v_m^{c_1} u_{c_1} + v_m^{c_2} u_{c_2}$. The score $u_m$ is then redistributed to its parent neurons $p_1$ and $p_2$, which satisfies $v_{p_1}^{m} + v_{p_2}^{m} = 1$.]", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7)) from predictions to input tokens, following the ideas of Bach et al. (2015) and Zhang et al. (2016).", "Let $u_m$ denote neuron $m$'s uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2, $u_m$ is computed by the summation of the scores backpropagated from its child neurons: $u_m = \sum_{c \in \mathrm{Child}(m)} v_m^c u_c$, where $\mathrm{Child}(m)$ is the set of $m$'s child neurons, and the non-negative contribution ratio $v_m^c$ indicates how much of $u_c$ we backpropagate to neuron $m$.", "Intuitively, if neuron $m$ contributes more to $c$'s value, the ratio $v_m^c$ should be larger.", "After obtaining score $u_m$, we redistribute it to its parent neurons in the same way.", "Contribution ratios from $m$ to its parent neurons are normalized to 1: $\sum_{p \in \mathrm{Parent}(m)} v_p^m = 1$, where $\mathrm{Parent}(m)$ is the set of $m$'s parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let $x$ denote the input.", "The output is computed by $z = \sigma(Wx + b)$, where $\sigma$ is a nonlinear function, $W \in \mathbb{R}^{|z| \times |x|}$ is the weight matrix, $b \in \mathbb{R}^{|z|}$ is the bias, and neuron $z_i$ is computed via $z_i = \sigma(\sum_{j=1}^{|x|} W_{i,j} x_j + b_i)$.", "Neuron $x_k$'s uncertainty score $u_{x_k}$ is gathered from the next layer: $u_{x_k} = \sum_{i=1}^{|z|} v_{x_k}^{z_i} u_{z_i} = \sum_{i=1}^{|z|} \frac{|W_{i,k} x_k|}{\sum_{j=1}^{|x|} |W_{i,j} x_j|} u_{z_i}$, ignoring the nonlinear function $\sigma$ and the bias $b$.", "The ratio $v_{x_k}^{z_i}$ is proportional to the contribution of $x_k$ to the value of $z_i$.", "[Algorithm 2 Uncertainty Interpretation. Input: $q, a$: input and its prediction. Output: $\{\hat{u}_{q_t}\}_{t=1}^{|q|}$: interpretation scores for input tokens. Function TokenUnc: get token-level uncertainty. 1: Get token-level uncertainty for predicted tokens 2: $\{u_{a_t}\}_{t=1}^{|a|} \leftarrow \mathrm{TokenUnc}(q, a)$ 3: Initialize uncertainty scores for backpropagation 4: for $t \leftarrow 1, \ldots, |a|$ do 5: decoder classifier's output neuron $\leftarrow u_{a_t}$ 6: Run backpropagation 7: for $m \leftarrow$ neuron in backward topological order do 8: gather scores from child neurons 9: $u_m \leftarrow \sum_{c \in \mathrm{Child}(m)} v_m^c u_c$ 10: Summarize scores for input words 11: for $t \leftarrow 1, \ldots, |q|$ do 12: $u_{q_t} \leftarrow \sum_{c \in q_t} u_c$ 13: $\{\hat{u}_{q_t}\}_{t=1}^{|q|} \leftarrow$ normalize $\{u_{q_t}\}_{t=1}^{|q|}$.]", "We define backpropagation rules for elementwise vector operators.", "For $z = x \pm y$, these are: $u_{x_k} = \frac{|x_k|}{|x_k| + |y_k|} u_{z_k}$ and $u_{y_k} = \frac{|y_k|}{|x_k| + |y_k|} u_{z_k}$, where the contribution ratios $v_{x_k}^{z_k}$ and $v_{y_k}^{z_k}$ are determined by $|x_k|$ and $|y_k|$.", "For multiplication, the contributions of the two elements in $\frac{1}{3} \times 3$ should be the same.", "So, the propagation rules for $z = x \odot y$ are: $u_{x_k} = \frac{|\log |x_k||}{|\log |x_k|| + |\log |y_k||} u_{z_k}$ and $u_{y_k} = \frac{|\log |y_k||}{|\log |x_k|| + |\log |y_k||} u_{z_k}$, where the contribution ratios are determined by $|\log |x_k||$ and $|\log |y_k||$.", "For scalar multiplication, $z = \lambda x$, where $\lambda$ denotes a constant.", "We directly assign $z$'s uncertainty scores to $x$, and the backpropagation rule is $u_{x_k} = u_{z_k}$.", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token $a_t$, we compute its uncertainty score $u_{a_t}$ as in Equation (7).", "Next, we find the dimension of $a_t$ in the decoder's softmax classifier (Equation (5)), and initialize the neuron with the uncertainty score $u_{a_t}$.", "We then backpropagate these uncertainty scores through the network (lines 6-9), and finally into the neurons of the input words.", "[Table 1: Dataset examples. IFTTT: turn android phone to full volume at 7am monday to friday -> date time-every day of the week at-((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device-set ringtone volume-(volume ({'volume level':1.0,'name':'100%'})). DJANGO: for every key in sorted list of user settings -> for key in sorted(user settings):]", "We summarize them and compute the token-level scores for interpreting the results (lines 10-13).", "For input word vector $q_t$, we use the summation of its neuron-level scores as the token-level score: $\hat{u}_{q_t} \propto \sum_{c \in q_t} u_c$, where $c \in q_t$ represents the neurons of word vector $q_t$, and $\sum_{t=1}^{|q|} \hat{u}_{q_t} = 1$.", "We use the normalized score $\hat{u}_{q_t}$ to indicate token $q_t$'s contribution to prediction uncertainty.",
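A NumPy sketch, under the stated simplifications (bias and nonlinearity ignored), of the three redistribution rules for fully-connected layers, elementwise addition, and elementwise multiplication; the `eps` smoothing and function names are our additions, not part of the paper.

```python
import numpy as np

def backprop_linear(u_z, W, x, eps=1e-12):
    """z = Wx + b: neuron x_k receives u_{z_i} weighted by
    |W[i, k] * x[k]| / sum_j |W[i, j] * x[j]|."""
    contrib = np.abs(W * x[None, :])                 # |W_{i,j} x_j|
    ratios = contrib / (contrib.sum(axis=1, keepdims=True) + eps)
    return ratios.T @ u_z                            # u_{x_k}

def backprop_add(u_z, x, y, eps=1e-12):
    """z = x + y: split u_{z_k} in proportion to |x_k| and |y_k|."""
    wx, wy = np.abs(x), np.abs(y)
    s = wx + wy + eps
    return u_z * wx / s, u_z * wy / s

def backprop_mul(u_z, x, y, eps=1e-12):
    """z = x * y (elementwise): split in proportion to |log|x_k|| and
    |log|y_k||, so that e.g. 1/3 and 3 receive equal shares."""
    wx = np.abs(np.log(np.abs(x) + eps))
    wy = np.abs(np.log(np.abs(y) + eps))
    s = wx + wy + eps
    return u_z * wx / s, u_z * wy / s

u_z = np.array([0.2, 0.1])
W = np.array([[1.0, -2.0, 0.5], [0.0, 1.0, 1.0]])
x = np.array([0.3, -0.1, 0.8])
print(backprop_linear(u_z, W, x))  # uncertainty pushed back onto x
print(backprop_mul(np.array([0.5]), np.array([1 / 3]), np.array([3.0])))
```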
"Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1.", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77,495 training, 5,171 development, and 4,294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16,000 training, 1,000 development, and 1,805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017).", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with placeholders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as the evaluation metric (Quirk et al., 2015).", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016).", "For DJANGO, we measure the fraction of exact matches, where the F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown tokens in the prediction with the input words they align to (Luong et al., 2015b).", "[Table 2: Spearman ρ correlation between confidence scores and F1. Best results are shown in bold. All correlations are significant at p < 0.01.]",
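A hedged sketch of the preprocessing pipeline described above (NLTK tokenization, lowercasing, rare-word filtering, placeholder substitution); the regexes and placeholder tokens are illustrative guesses, not the authors' exact rules.

```python
import re
from collections import Counter
from nltk.tokenize import word_tokenize  # requires NLTK's 'punkt' models

def build_vocab(tokenized_sentences, min_count=4):
    """Keep words appearing at least four times in the training set."""
    counts = Counter(t for s in tokenized_sentences for t in s)
    return {t for t, c in counts.items() if c >= min_count}

def preprocess(text):
    toks = [t.lower() for t in word_tokenize(text)]
    # Illustrative placeholder rules for numbers and URLs (quoted strings
    # in DJANGO would be handled analogously).
    return ["<url>" if re.match(r"^https?://", t)
            else "<num>" if re.fullmatch(r"\d+", t)
            else t for t in toks]

print(preprocess("Save Instagram photos taken at 7 am to http://dropbox.com"))
```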
"The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017).", "To estimate model uncertainty, we set the dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of the Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013).", "For input uncertainty, we computed the variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left at their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on the posterior probability $p(a|q)$ (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) obtained by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient, which varies between −1 and 1 (0 implies there is no correlation).", "A high ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2, our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improvements for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994).", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly, metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As the importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as a feature to the branch of the decision tree (Chen and Guestrin, 2016).", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays the most important role.", "[Table 5: Importance scores of confidence metrics (normalized by maximum value on each dataset). Best results are shown in bold. Same shorthands apply as in Table 3.]",
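The evaluation protocol of Table 2 amounts to a Spearman correlation between confidence scores and per-example F1, e.g., with scipy; the synthetic scores below are placeholders that merely mimic a weaker and a stronger scorer.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-ins: f1[i] is example i's F1; the two scoring methods
# correlate with it to different degrees, mimicking Table 2.
rng = np.random.default_rng(0)
f1 = rng.random(200)
posterior = f1 + rng.normal(0.0, 0.5, 200)  # weaker baseline
conf = f1 + rng.normal(0.0, 0.2, 200)       # stronger CONF-style scorer

for name, scores in (("POSTERIOR", posterior), ("CONF", conf)):
    rho, p = spearmanr(scores, f1)
    print(f"{name}: rho = {rho:.3f} (p = {p:.2g})")
```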
"On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful, because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade off precision for coverage.", "Figure 3 shows how the F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "The F1 score improves monotonically for POSTERIOR and our method, which, however, achieves better performance when coverage is the same.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2), attention scores $r_{t,k}$ can be used as soft alignments between the time step $t$ of the decoder and the $k$-th input token.", "We compute the normalized uncertainty score $\hat{u}_{q_k}$ for a token $q_k$ via: $\hat{u}_{q_k} \propto \sum_{t=1}^{|a|} r_{t,k}\, u_{a_t}$ (8), where $u_{a_t}$ is the uncertainty score of the predicted token $a_t$ (Equation (7)), and $\sum_{k=1}^{|q|} \hat{u}_{q_k} = 1$.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not know a priori which tokens in the natural language input contribute to uncertainty, and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise into the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token $q_t$ (Equation (6)); the addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise into one token at a time instead of all parameters (see Figure 1).", "Tokens identified as uncertain by the above procedure are considered the gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list $\tau_1$ of the $K$ tokens with the highest scores.", "We also obtain a list $\tau_2$ of the $K$ tokens with the highest ground-truth scores, and measure the degree of overlap between these two lists: $\mathrm{overlap@}K = \frac{|\tau_1 \cap \tau_2|}{K}$, where $K \in \{2, 4\}$ in our experiments.", "For example, the overlap@4 metric of the lists $\tau_1 = [q_7, q_8, q_2, q_3]$ and $\tau_2 = [q_7, q_8, q_3, q_4]$ is 3/4, because there are three overlapping tokens.", "[Table 6: Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard. Overlap is shown for the top 2 and 4 tokens. Best results are in bold. IFTTT: ATTENTION 0.525 (@2) / 0.737 (@4), BACKPROP 0.608 (@2) / 0.791 (@4). DJANGO: ATTENTION 0.637 (@2) / 0.684 (@4), BACKPROP 0.770 (@2) / 0.788 (@4).]",
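The overlap@K metric is straightforward to implement; the sketch below reproduces the worked example from the text (the score arrays are contrived so that the top-4 sets match τ1 and τ2).

```python
def overlap_at_k(pred_scores, gold_scores, k):
    """overlap@K = |tau1 ∩ tau2| / K over the K highest-scoring positions."""
    def top(scores):
        return set(sorted(range(len(scores)), key=lambda i: -scores[i])[:k])
    return len(top(pred_scores) & top(gold_scores)) / k

# Reproduces the worked example: tau1 = [q7, q8, q2, q3],
# tau2 = [q7, q8, q3, q4] -> overlap@4 = 3/4.
pred = [0, 0, 0.8, 0.7, 0.0, 0, 0, 1.0, 0.9]  # top-4: positions 7, 8, 2, 3
gold = [0, 0, 0.0, 0.7, 0.6, 0, 0, 1.0, 0.9]  # top-4: positions 7, 8, 3, 4
print(overlap_at_k(pred, gold, 4))  # 0.75
```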
"[Table 7: Uncertainty interpretation for ATTENTION (ATT) and BACKPROP (BP). The first line in each group is the model prediction. Predicted tokens and input words with large scores are shown in red and blue, respectively. Examples: (1) google calendar-any event starts THEN facebook-create a status message-(status message ({description})); ATT/BP: post calendar event to facebook. (2) feed-new feed item-(feed url(url sports.espn.go.com)) THEN ...; ATT/BP: espn mlb headline to readability. (3) weather-tomorrow's low drops below-((temperature(0)) (degrees in(c))) THEN ...; ATT/BP: warn me when it's going to be freezing tomorrow. (4) if str number[0] == 'STR':; ATT/BP: if first element of str number equals a string STR. (5) start = 0; ATT/BP: start is an integer 0. (6) if name.startswith('STR'):; ATT/BP: if name starts with an string STR,]", "Table 6 reports results with overlap@2 and overlap@4.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.", "Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight token $a_t$ if its uncertainty score $u_{a_t}$ is greater than $0.5 \times \mathrm{avg}\{u_{a_{t'}}\}_{t'=1}^{|a|}$.", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs, and message content), and ambiguous inputs.", "The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments, while BACKPROP can utilize information flowing through the LSTMs rather than only relying on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures.", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
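The precision-coverage trade-off of Figure 3 can be simulated by answering only the most confident fraction of examples; everything below is a synthetic illustration, not the paper's numbers.

```python
import numpy as np

def f1_at_coverage(conf, f1, coverage):
    """Answer only the `coverage` fraction of examples with the highest
    confidence and report their mean F1 (the trade-off of Figure 3)."""
    order = np.argsort(-conf)
    keep = order[: max(1, int(round(coverage * len(conf))))]
    return f1[keep].mean()

rng = np.random.default_rng(0)
f1 = rng.random(1000)
conf = f1 + rng.normal(0.0, 0.2, 1000)  # synthetic confidence scores
for cov in (1.0, 0.8, 0.5, 0.2):
    print(f"coverage = {cov:.1f}  ->  F1 = {f1_at_coverage(conf, f1, cov):.3f}")
```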
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-10
Uncertainty Interpretation
Trace prediction uncertainty back to input words Users can verify or refine the input quickly Benefit the development cycle IF text me when its freezing Agreement of top-4 uncertain input words Between model prediction and gold standard
Trace prediction uncertainty back to input words Users can verify or refine the input quickly Benefit the development cycle IF text me when its freezing Agreement of top-4 uncertain input words Between model prediction and gold standard
[]
GEM-SciDuet-train-114#paper-1307#slide-11
1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
modeling framework for sequenceto-sequence models.", "We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al.", "2015; and DJANGO, Oda et al.", "2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) , and question answering (Gondek et al., 2012) .", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017) .", "But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process.", "We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015) .", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016) , data augmentation (Jia and Liang, 2016; , the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; , coarse-tofine decoding (Dong and Lapata, 2018) , network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017) , user feedback (Iyer et al., 2017) , and transfer learning (Fan et al., 2017) .", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected inadvertently 
affecting user experience.", "Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1 .", "An encoder is used to encode natural language input q = q 1 · · · q |q| into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a 1 · · · a |a| conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence p (a|q) is factorized as: p (a|q) = |a| t=1 p (a t |a <t , q) (1) where a <t = a 1 · · · a t−1 .", "Let e t ∈ R n denote the hidden vector of the encoder at time step t. It is computed via e t = f LSTM (e t−1 , q t ), where f LSTM refers to the LSTM unit, and q t ∈ R n is the word embedding … … … <s> … … … i) iii) i) ii) iv) Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty.", "The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.", "of q t .", "Once the tokens of the input sequence are encoded into vectors, e |q| is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step t is computed by d t = f LSTM (d t−1 , a t−1 ), where a t−1 ∈ R n is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as: r t,k ∝ exp{d t · e k } (2) where |q| j=1 r t,j = 1.", "The probability of generating a t is computed via: c t = |q| k=1 r t,k e k (3) d att t = tanh (W 1 d t + W 2 c t ) (4) p (a t |a <t , q) = softmax at W o d att t (5) where W 1 , W 2 ∈ R n×n and W o ∈ R |Va|×n are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize (q,a)∈D log p (a|q), where D represents training pairs.", "At test time, the model's prediction for input q is obtained viâ a = arg max a p (a |q), where a represents candidate outputs.", "Because p (a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ← 1, · · · , F do 2:M i ← Apply dropout layers to M Figure 1 3: Run forward pass and computep(a|q;M i ) 4: Compute variance of {p(a|q;M i )} F i=1 Equation (6) score s (q, a) ∈ (0, 1).", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to 
predict s (q, a).", "Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q).", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016) .", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {p(a|q; M i )} F i=1 whereM i represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: var{p(a|q;M i )} F i=1 .", "(6) In addition, we compute uncertainty u at at the token-level a t via: u at = var{p(a t |a <t , q;M i )} F i=1 (7) wherep(a t |a <t , q;M i ) is the probability of generating token a t (Equation (5) ) using perturbed modelM i .", "We operationalize tokenlevel uncertainty in two ways, as the average score avg{u at } |a| t=1 and the maximum score max{u at } |a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1 , we add dropout layers in i) the word vectors of the encoder and decoder q t , a t ; ii) the output vectors of the encoder e t ; iii) bridge vectors e |q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors d att t (Equation (4) ).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N (0, σ 2 ).", "We usev = v + g andv = v + v g as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability log p(a|q) as a sequence-level metric.", "The token-level metric min{p(a t |a <t , q)} |a| t=1 can identify the most uncertain predicted token.", "The perplexity per token − 1 |a| |a| t=1 log p (a t |a <t , q) is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input q as a metric.", "Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, 
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequencelevel metric is computed by: var{p(a i |q)} K i=1 where a 1 .", ".", ".", "a K are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = − a p(a |q) log p(a |q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by avg{H[a t |a <t , q]} |a| t=1 and max{H[a t |a <t , q]} |a| t=1 .", "Confidence Scoring The sentence-and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a).", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: (q,a)∈D ln(1+e −ŝ(q,a) ) yq,a + ln(1+eŝ (q,a) ) (1−yq,a) where D represents the data, y q,a is the target F1 score, andŝ(q, a) the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying = v c 1 m u c 1 + v c 2 m u c 2 .", "The score u m is then redistributed to its parent neurons p 1 and p 2 , which satisfies v m p 1 + v m p 2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7) ) from predictions to input tokens, following the ideas of Bach et al.", "(2015) and Zhang et al.", "(2016) .", "Let u m denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2 , u m is computed by the summation of the scores backpropagated from its child neurons: u m = c∈Child(m) v c m u c where Child(m) is the set of m's child neurons, and the non-negative contribution ratio v c m indicates how much we backpropagate u c to neuron m. 
Intuitively, if neuron m contributes more to c's value, ratio v c m should be larger.", "After obtaining score u m , we redistribute it to its parent neurons in the same way.", "Contribution ratios from m to its parent neurons are normalized to 1: p∈Parent(m) v m p = 1 where Parent(m) is the set of m's parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let x denote the input.", "The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈ R |z| * |x| is the weight matrix, b ∈ R |z| is the bias, and neuron z i is computed via z i = σ( |x| j=1 W i,j x j + b i ).", "Neuron x k 's uncertainty score u x k is gath-Algorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ûq t } |q| t=1 : Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: Get token-level uncertainty for predicted tokens 2: {ua t } |a| t=1 ← TokenUnc(q, a) 3: Initialize uncertainty scores for backpropagation 4: for t ← 1, · · · , |a| do 5: Decoder classifier's output neuron ← ua t 6: Run backpropagation 7: for m ← neuron in backward topological order do 8: Gather scores from child neurons 9: um ← c∈Child(m) v c m uc 10: Summarize scores for input words 11: for t ← 1, · · · , |q| do 12: uq t ← c∈q t uc 13: {ûq t } |q| t=1 ← normalize {uq t } |q| t=1 ered from the next layer: u x k = |z| i=1 v z i x k u z i = |z| i=1 |W i,k x k | |x| j=1 |W i,j x j | u z i ignoring the nonlinear function σ and the bias b.", "The ratio v z i x k is proportional to the contribution of x k to the value of z i .", "We define backpropagation rules for elementwise vector operators.", "For z = x ± y, these are: u x k = |x k | |x k |+|y k | u z k u y k = |y k | |x k |+|y k | u z k where the contribution ratios v z k x k and v z k y k are determined by |x k | and |y k |.", "For multiplication, the contribution of two elements in 1 3 * 3 should be the same.", "So, the propagation rules for z = x y are: u x k = | log |x k || | log |x k ||+| log |y k || u z k u y k = | log |y k || | log |x k ||+| log |y k || u z k where the contribution ratios are determined by | log |x k || and | log |y k ||.", "For scalar multiplication, z = λx where λ denotes a constant.", "We directly assign z's uncertainty scores to x and the backpropagation rule is u x k = u z k .", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token a t , we compute its uncertainty score u at as in Equation (7) .", "Next, we find the dimension of a t in the decoder's softmax classifier (Equation (5) ), and initialize the neuron with the uncertainty score u at .", "We then backpropagate these uncertainty scores through Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({' volume level':1.0,'name':'100%'})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): the network (lines 6-9), and finally into the neurons of the input words.", "We summarize them and compute the token-level scores for interpreting the results (line 10-13).", "For input word vector q t , we use the summation of its neuron-level scores as the token-level score:û qt ∝ c∈qt u c where c ∈ q t represents the neurons of word 
vector q t , and |q| t=1û qt = 1.", "We use the normalized scoreû qt to indicate token q t 's contribution to prediction uncertainty.", "Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/ donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1 .", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017) .", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015) .", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016) .", "For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to- Table 2 : Spearman ρ correlation between confidence scores and F1.", "Best results are shown in bold.", "All correlations are significant at p < 0.01. 
kens in the prediction with the input words they align to (Luong et al., 2015b) .", "The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017) .", "To estimate model uncertainty, we set dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013) .", "For input uncertainty, we computed variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on posterior probability p(a|q) (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation).", "High ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2 , our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improve- Table 3 .", "ments for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994) .", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016) .", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays Table 5 : Importance scores of confidence metrics (normalized by maximum value on each dataset).", "Best results are shown in bold.", "Same shorthands apply as in Table 3. 
"On IFTTT, the number of unknown tokens (#UNK) and the variance of the top candidates (var(K-best)) are also very helpful, because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often thresholded to trade off precision for coverage.", "Figure 3 shows how the F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "The F1 score improves monotonically for both POSTERIOR and our method, but our method achieves better performance at the same coverage.", "Uncertainty Interpretation We next evaluate how well our backpropagation method (see Section 5) identifies the input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2), the attention scores $r_{t,k}$ can be used as soft alignments between time step $t$ of the decoder and the $k$-th input token.", "We compute the normalized uncertainty score $\hat{u}_{q_k}$ for a token $q_k$ via $u_{q_k} \propto \sum_{t=1}^{|a|} r_{t,k}\, u_{a_t}$ (8), where $u_{a_t}$ is the uncertainty score of the predicted token $a_t$ (Equation (7)), and $\sum_{k=1}^{|q|} \hat{u}_{q_k} = 1$.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not know a priori which tokens in the natural language input contribute to uncertainty, and these may vary depending on the architecture used, the model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise into the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token $q_t$ (Equation (6)); the addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise into one token at a time instead of into all parameters (see Figure 1).", "Tokens identified as uncertain by the above procedure are considered the gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb the vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) between the tokens identified as uncertain by the model and by the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list $\tau_1$ of the K tokens with the highest scores.", "We also obtain a list $\tau_2$ of the K tokens with the highest ground-truth scores, and measure the degree of overlap between these two lists: $\mathrm{overlap@}K = \frac{|\tau_1 \cap \tau_2|}{K}$, where $K \in \{2, 4\}$ in our experiments.", "For example, the overlap@4 of the lists $\tau_1 = [q_7, q_8, q_2, q_3]$ and $\tau_2 = [q_7, q_8, q_3, q_4]$ is 3/4, because three tokens overlap.", "[Table 6: Uncertainty interpretation against the inferred ground truth: overlap between tokens identified as contributing to uncertainty by each method and those in the gold standard, shown for the top 2 and 4 tokens. Best results are in bold.
Method      IFTTT@2  IFTTT@4  DJANGO@2  DJANGO@4
ATTENTION   0.525    0.737    0.637     0.684
BACKPROP    0.608    0.791    0.770     0.788]",
"[Table 7: Uncertainty interpretation for ATTENTION (ATT) and BACKPROP (BP). The first line in each group is the model prediction; predicted tokens and input words with large scores are shown in red and blue, respectively.
1. google calendar−any event starts THEN facebook−create a status message−(status message({description}))
   ATT: post calendar event to facebook / BP: post calendar event to facebook
2. feed−new feed item−(feed url(url sports.espn.go.com)) THEN ...
   ATT: espn mlb headline to readability / BP: espn mlb headline to readability
3. weather−tomorrow's low drops below−((temperature(0)) (degrees in(c))) THEN ...
   ATT: warn me when it's going to be freezing tomorrow / BP: warn me when it's going to be freezing tomorrow
4. if str number[0] == ' STR ':
   ATT: if first element of str number equals a string STR . / BP: if first element of str number equals a string STR .
5. start = 0
   ATT: start is an integer 0 . / BP: start is an integer 0 .
6. if name.startswith(' STR '):
   ATT: if name starts with an string STR , / BP: if name starts with an string STR ,]", "Table 6 reports results with overlap@2 and overlap@4.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.", "Table 7 shows examples where our method has identified the input tokens contributing to the uncertainty of the output.", "We highlight a token $a_t$ if its uncertainty score $u_{a_t}$ is greater than $0.5 \cdot \mathrm{avg}\{u_{a_{t'}}\}_{t'=1}^{|a|}$.", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs and message content) and about ambiguous inputs.", "The examples show that BACKPROP is qualitatively better than ATTENTION; attention scores often produce inaccurate alignments, while BACKPROP can utilize information flowing through the LSTMs rather than relying only on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks employing sequence-to-sequence architectures (Bahdanau et al., 2015; Schmaltz et al., 2017).", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
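The overlap@K metric used in Table 6 is straightforward to implement. The sketch below assumes per-token interpretation scores are already available; `overlap_at_k` is a hypothetical helper, and the score lists are made up so that the model's top-4 and gold top-4 overlap in three tokens, mirroring the τ1/τ2 illustration (overlap@4 = 3/4):

```python
# Minimal sketch of overlap@K between model and gold uncertainty rankings.
def overlap_at_k(scores_model, scores_gold, k):
    """Fraction of overlap between the top-k tokens of two score lists."""
    top_model = set(sorted(range(len(scores_model)),
                           key=scores_model.__getitem__, reverse=True)[:k])
    top_gold = set(sorted(range(len(scores_gold)),
                          key=scores_gold.__getitem__, reverse=True)[:k])
    return len(top_model & top_gold) / k

# Hypothetical scores: model top-4 indices {7, 8, 2, 3}, gold top-4 {7, 8, 3, 4}.
model_scores = [0.0, 0.0, 0.20, 0.15, 0.10, 0.0, 0.0, 0.40, 0.30]
gold_scores = [0.0, 0.0, 0.05, 0.15, 0.12, 0.0, 0.0, 0.40, 0.30]
print(overlap_at_k(model_scores, gold_scores, k=4))  # 0.75
```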
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-11
Uncertainty Backpropagation
Initialize the decoder's output neurons with uncertainty scores, then backpropagate to obtain scores for the input words. [Diagram: encoder and decoder LSTM cells.] $u_m = \sum_{c \in \mathrm{Child}(m)} v_m^c u_c$ gathers scores from the child neurons $\mathrm{Child}(m)$; contribution ratios from $m$ to its parent neurons are normalized to 1: $\sum_{p \in \mathrm{Parent}(m)} v_p^m = 1$. (A toy sketch follows below.)
[]
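The slide summarizes the score-passing scheme; a toy sketch on a small hypothetical three-neuron graph (not the actual parser network) could look as follows. `backprop_uncertainty`, the `ratios` dictionary, and the layer layout are all made up for illustration:

```python
# Toy uncertainty backpropagation: decoder output neurons are seeded with
# token-level uncertainty scores, then scores flow backwards to the inputs.
def backprop_uncertainty(output_unc, ratios, layers):
    """output_unc: {neuron: score} for the decoder's output neurons.
    ratios: {(child, parent): v} contribution ratios, normalized per child.
    layers: lists of neurons in backward topological order."""
    unc = dict(output_unc)
    for layer in layers[1:]:  # walk backwards through the network
        for m in layer:
            # gather scores from child neurons: u_m = sum_c v^c_m * u_c
            unc[m] = sum(unc[c] * v for (c, p), v in ratios.items() if p == m)
    return unc

ratios = {("y", "h1"): 0.7, ("y", "h2"): 0.3,  # ratios leaving "y" sum to 1
          ("h1", "x"): 1.0, ("h2", "x"): 1.0}
print(backprop_uncertainty({"y": 0.9}, ratios, [["y"], ["h1", "h2"], ["x"]]))
```

Because the ratios leaving each child sum to 1, the total uncertainty mass is conserved as it flows from the output back to the input words.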
GEM-SciDuet-train-114#paper-1307#slide-12
1307
Confidence Modeling for Neural Semantic Parsing
GEM-SciDuet-train-114#paper-1307#slide-12
Backpropagation Rules
If neuron $m$ contributes more to $c$'s value, the ratio $v_m^c$ should be larger (i.e., backpropagate more of $u_c$ to $u_m$). (A sketch instantiating this rule for a fully-connected layer follows below.)
[]
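For a fully-connected layer $z = \sigma(Wx + b)$, the rule instantiates as $u_{x_k} = \sum_i \frac{|W_{i,k} x_k|}{\sum_j |W_{i,j} x_j|} u_{z_i}$. A minimal sketch follows, assuming this layer form; `fc_backprop_uncertainty` is an illustrative helper, and `W`, `x`, and `u_z` are made-up values rather than trained parameters:

```python
# Sketch of the fully-connected backpropagation rule: each input's share of
# an output neuron's uncertainty is proportional to |W[i, k] * x[k]|.
import numpy as np

def fc_backprop_uncertainty(W, x, u_z):
    contrib = np.abs(W * x)                            # |W[i, k] * x[k]| per (i, k)
    ratios = contrib / contrib.sum(axis=1, keepdims=True)  # rows sum to 1
    return ratios.T @ u_z                              # u_x[k] = sum_i v^{z_i}_{x_k} u_z[i]

W = np.array([[0.5, -1.0], [2.0, 0.1]])
x = np.array([1.0, 2.0])
u_z = np.array([0.4, 0.6])
print(fc_backprop_uncertainty(W, x, u_z))  # uncertainty scores for x; sums to u_z.sum()
```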
GEM-SciDuet-train-114#paper-1307#slide-13
1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
modeling framework for sequenceto-sequence models.", "We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al.", "2015; and DJANGO, Oda et al.", "2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) , and question answering (Gondek et al., 2012) .", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017) .", "But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process.", "We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015) .", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016) , data augmentation (Jia and Liang, 2016; , the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; , coarse-tofine decoding (Dong and Lapata, 2018) , network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017) , user feedback (Iyer et al., 2017) , and transfer learning (Fan et al., 2017) .", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected inadvertently 
affecting user experience.", "Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1 .", "An encoder is used to encode natural language input q = q 1 · · · q |q| into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a 1 · · · a |a| conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence p (a|q) is factorized as: p (a|q) = |a| t=1 p (a t |a <t , q) (1) where a <t = a 1 · · · a t−1 .", "Let e t ∈ R n denote the hidden vector of the encoder at time step t. It is computed via e t = f LSTM (e t−1 , q t ), where f LSTM refers to the LSTM unit, and q t ∈ R n is the word embedding … … … <s> … … … i) iii) i) ii) iv) Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty.", "The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.", "of q t .", "Once the tokens of the input sequence are encoded into vectors, e |q| is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step t is computed by d t = f LSTM (d t−1 , a t−1 ), where a t−1 ∈ R n is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as: r t,k ∝ exp{d t · e k } (2) where |q| j=1 r t,j = 1.", "The probability of generating a t is computed via: c t = |q| k=1 r t,k e k (3) d att t = tanh (W 1 d t + W 2 c t ) (4) p (a t |a <t , q) = softmax at W o d att t (5) where W 1 , W 2 ∈ R n×n and W o ∈ R |Va|×n are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize (q,a)∈D log p (a|q), where D represents training pairs.", "At test time, the model's prediction for input q is obtained viâ a = arg max a p (a |q), where a represents candidate outputs.", "Because p (a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ← 1, · · · , F do 2:M i ← Apply dropout layers to M Figure 1 3: Run forward pass and computep(a|q;M i ) 4: Compute variance of {p(a|q;M i )} F i=1 Equation (6) score s (q, a) ∈ (0, 1).", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to 
predict s (q, a).", "Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q).", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016) .", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {p(a|q; M i )} F i=1 whereM i represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: var{p(a|q;M i )} F i=1 .", "(6) In addition, we compute uncertainty u at at the token-level a t via: u at = var{p(a t |a <t , q;M i )} F i=1 (7) wherep(a t |a <t , q;M i ) is the probability of generating token a t (Equation (5) ) using perturbed modelM i .", "We operationalize tokenlevel uncertainty in two ways, as the average score avg{u at } |a| t=1 and the maximum score max{u at } |a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1 , we add dropout layers in i) the word vectors of the encoder and decoder q t , a t ; ii) the output vectors of the encoder e t ; iii) bridge vectors e |q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors d att t (Equation (4) ).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N (0, σ 2 ).", "We usev = v + g andv = v + v g as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability log p(a|q) as a sequence-level metric.", "The token-level metric min{p(a t |a <t , q)} |a| t=1 can identify the most uncertain predicted token.", "The perplexity per token − 1 |a| |a| t=1 log p (a t |a <t , q) is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input q as a metric.", "Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, 
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequencelevel metric is computed by: var{p(a i |q)} K i=1 where a 1 .", ".", ".", "a K are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = − a p(a |q) log p(a |q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by avg{H[a t |a <t , q]} |a| t=1 and max{H[a t |a <t , q]} |a| t=1 .", "Confidence Scoring The sentence-and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a).", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: (q,a)∈D ln(1+e −ŝ(q,a) ) yq,a + ln(1+eŝ (q,a) ) (1−yq,a) where D represents the data, y q,a is the target F1 score, andŝ(q, a) the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying = v c 1 m u c 1 + v c 2 m u c 2 .", "The score u m is then redistributed to its parent neurons p 1 and p 2 , which satisfies v m p 1 + v m p 2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7) ) from predictions to input tokens, following the ideas of Bach et al.", "(2015) and Zhang et al.", "(2016) .", "Let u m denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2 , u m is computed by the summation of the scores backpropagated from its child neurons: u m = c∈Child(m) v c m u c where Child(m) is the set of m's child neurons, and the non-negative contribution ratio v c m indicates how much we backpropagate u c to neuron m. 
Uncertainty Interpretation Confidence scores are useful insofar as they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "[Figure 2: Neuron $m$'s uncertainty score is gathered from its child neurons, $u_m = v^{c_1}_m u_{c_1} + v^{c_2}_m u_{c_2}$; the score $u_m$ is then redistributed to its parent neurons $p_1$ and $p_2$, which satisfies $v^m_{p_1} + v^m_{p_2} = 1$.]", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7)) from predictions to input tokens, following the ideas of Bach et al. (2015) and Zhang et al. (2016).", "Let $u_m$ denote neuron $m$'s uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2, $u_m$ is computed by the summation of the scores backpropagated from its child neurons: $u_m = \sum_{c\in\mathrm{Child}(m)} v^c_m u_c$, where $\mathrm{Child}(m)$ is the set of $m$'s child neurons, and the non-negative contribution ratio $v^c_m$ indicates how much of $u_c$ we backpropagate to neuron $m$.", "Intuitively, if neuron $m$ contributes more to $c$'s value, the ratio $v^c_m$ should be larger.", "After obtaining the score $u_m$, we redistribute it to its parent neurons in the same way.", "Contribution ratios from $m$ to its parent neurons are normalized to 1: $\sum_{p\in\mathrm{Parent}(m)} v^m_p = 1$, where $\mathrm{Parent}(m)$ is the set of $m$'s parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let $x$ denote the input.", "The output is computed by $z = \sigma(Wx + b)$, where $\sigma$ is a nonlinear function, $W \in \mathbb{R}^{|z|\times|x|}$ is the weight matrix, $b \in \mathbb{R}^{|z|}$ is the bias, and neuron $z_i$ is computed via $z_i = \sigma(\sum_{j=1}^{|x|} W_{i,j} x_j + b_i)$.", "Neuron $x_k$'s uncertainty score $u_{x_k}$ is gathered from the next layer: $u_{x_k} = \sum_{i=1}^{|z|} v^{z_i}_{x_k} u_{z_i} = \sum_{i=1}^{|z|} \frac{|W_{i,k} x_k|}{\sum_{j=1}^{|x|} |W_{i,j} x_j|} u_{z_i}$, ignoring the nonlinear function $\sigma$ and the bias $b$.", "The ratio $v^{z_i}_{x_k}$ is proportional to the contribution of $x_k$ to the value of $z_i$.", "We define backpropagation rules for element-wise vector operators.", "For $z = x \pm y$, these are: $u_{x_k} = \frac{|x_k|}{|x_k| + |y_k|} u_{z_k}$ and $u_{y_k} = \frac{|y_k|}{|x_k| + |y_k|} u_{z_k}$, where the contribution ratios $v^{z_k}_{x_k}$ and $v^{z_k}_{y_k}$ are determined by $|x_k|$ and $|y_k|$.", "For multiplication, the contribution of the two elements in $\frac{1}{3} \times 3$ should be the same.", "So, the propagation rules for $z = x \odot y$ are: $u_{x_k} = \frac{|\log|x_k||}{|\log|x_k|| + |\log|y_k||} u_{z_k}$ and $u_{y_k} = \frac{|\log|y_k||}{|\log|x_k|| + |\log|y_k||} u_{z_k}$, where the contribution ratios are determined by $|\log|x_k||$ and $|\log|y_k||$.", "For scalar multiplication, $z = \lambda x$, where $\lambda$ denotes a constant.", "We directly assign $z$'s uncertainty scores to $x$, and the backpropagation rule is $u_{x_k} = u_{z_k}$.", "[Algorithm 2 Uncertainty Interpretation. Input: $q, a$: input and its prediction. Output: $\{\hat{u}_{q_t}\}_{t=1}^{|q|}$: interpretation scores for input tokens. Function TokenUnc: get token-level uncertainty. 1: Get token-level uncertainty for predicted tokens; 2: $\{u_{a_t}\}_{t=1}^{|a|} \leftarrow \mathrm{TokenUnc}(q, a)$; 3: Initialize uncertainty scores for backpropagation; 4: for $t \leftarrow 1, \dots, |a|$ do 5: decoder classifier's output neuron $\leftarrow u_{a_t}$; 6: Run backpropagation; 7: for $m \leftarrow$ neuron in backward topological order do 8: gather scores from child neurons; 9: $u_m \leftarrow \sum_{c\in\mathrm{Child}(m)} v^c_m u_c$; 10: Summarize scores for input words; 11: for $t \leftarrow 1, \dots, |q|$ do 12: $u_{q_t} \leftarrow \sum_{c\in q_t} u_c$; 13: $\{\hat{u}_{q_t}\}_{t=1}^{|q|} \leftarrow$ normalize $\{u_{q_t}\}_{t=1}^{|q|}$.]", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token $a_t$, we compute its uncertainty score $u_{a_t}$ as in Equation (7).", "Next, we find the dimension of $a_t$ in the decoder's softmax classifier (Equation (5)), and initialize that neuron with the uncertainty score $u_{a_t}$.", "We then backpropagate these uncertainty scores through the network (lines 6-9), and finally into the neurons of the input words.", "[Table 1: Examples from the datasets. IFTTT: turn android phone to full volume at 7am monday to friday → date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({'volume level':1.0,'name':'100%'})). DJANGO: for every key in sorted list of user settings → for key in sorted(user settings):]", "We summarize them and compute the token-level scores for interpreting the results (lines 10-13).", "For input word vector $q_t$, we use the summation of its neuron-level scores as the token-level score: $\hat{u}_{q_t} \propto \sum_{c\in q_t} u_c$, where $c \in q_t$ represents the neurons of word vector $q_t$, and $\sum_{t=1}^{|q|} \hat{u}_{q_t} = 1$.", "We use the normalized score $\hat{u}_{q_t}$ to indicate token $q_t$'s contribution to prediction uncertainty.
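The redistribution rules above can be sketched operator by operator; the numpy version below is an illustration under the stated constraints, with small `eps` terms added to guard divisions and logarithms (these guards are not part of the paper's formulas).

```python
import numpy as np

# Sketch of the per-operator uncertainty redistribution rules. Each rule
# splits an output uncertainty vector u_z among the operands in proportion
# to the stated contribution ratios.

def linear_rule(W, x, u_z, eps=1e-12):
    """z = sigma(W @ x + b): u_{x_k} = sum_i |W_{i,k} x_k| / sum_j |W_{i,j} x_j| * u_{z_i}."""
    contrib = np.abs(W * x[None, :])                          # |W_{i,j} x_j|
    ratios = contrib / (contrib.sum(axis=1, keepdims=True) + eps)
    return ratios.T @ u_z                                     # gather over output neurons

def add_rule(x, y, u_z, eps=1e-12):
    """z = x +/- y: ratios determined by |x_k| and |y_k|."""
    denom = np.abs(x) + np.abs(y) + eps
    return np.abs(x) / denom * u_z, np.abs(y) / denom * u_z

def mul_rule(x, y, u_z, eps=1e-12):
    """z = x (element-wise *) y: ratios determined by |log|x_k|| and |log|y_k||."""
    lx = np.abs(np.log(np.abs(x) + eps))
    ly = np.abs(np.log(np.abs(y) + eps))
    denom = lx + ly + eps
    return lx / denom * u_z, ly / denom * u_z
```

Applying these rules in backward topological order over the unrolled encoder-decoder graph, as in Algorithm 2, pushes the decoder's token-level scores back to the input word vectors.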
Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and an analysis of model behavior.", "Our code is publicly available at https://github.com/donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1.", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77,495 training, 5,171 development, and 4,294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16,000 training, 1,000 development, and 1,805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017).", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with placeholders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as the evaluation metric (Quirk et al., 2015).", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016).", "For DJANGO, we measure the fraction of exact matches, where the F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown tokens in the prediction with the input words they align to (Luong et al., 2015b).
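The attention-based replacement of unknown tokens mentioned above can be sketched as a simple post-processing step; `attention` here stands for the matrix of scores $r_{t,k}$ from Equation (2), and the function signature is illustrative rather than the paper's code.

```python
def replace_unk(pred_tokens, input_tokens, attention, unk="<unk>"):
    """Replace each <unk> in the prediction with the input word it aligns
    to most strongly under the attention scores r_{t,k} (Luong et al., 2015b).
    `attention[t][k]` is the score between decoder step t and input token k."""
    out = []
    for t, tok in enumerate(pred_tokens):
        if tok == unk:
            k = max(range(len(input_tokens)), key=lambda j: attention[t][j])
            out.append(input_tokens[k])  # copy the aligned input word
        else:
            out.append(tok)
    return out
```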
[Table 2: Spearman ρ correlation between confidence scores and F1. Best results are shown in bold. All correlations are significant at p < 0.01.]", "The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017).", "To estimate model uncertainty, we set the dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of the Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013).", "For input uncertainty, we computed the variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on the posterior probability $p(a|q)$ (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) obtained by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient, which varies between −1 and 1 (0 implies there is no correlation).", "A high ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2, our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improvements for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994).", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly, metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As the importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as a feature to the branch of the decision tree (Chen and Guestrin, 2016).", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays the most important role.", "[Table 5: Importance scores of confidence metrics (normalized by the maximum value on each dataset). Best results are shown in bold. Same shorthands apply as in Table 3.]
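The evaluation protocol used throughout this section, rank-correlating confidence scores with per-example F1, is a one-liner with SciPy; the arrays below are random placeholders for the scorer's outputs and the gold F1 values.

```python
import numpy as np
from scipy.stats import spearmanr

# Sketch of the confidence evaluation: Spearman's rho between confidence
# scores and per-example F1; rho near 1 means high confidence tracks
# correct output. Dummy data stands in for real test-set values.
rng = np.random.RandomState(0)
confidence_scores = rng.rand(1000)
f1_scores = rng.rand(1000)
rho, p_value = spearmanr(confidence_scores, f1_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```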
On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful, because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade off precision for coverage.", "Figure 3 shows how the F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "The F1 score improves monotonically for POSTERIOR and our method, which, however, achieves better performance when coverage is the same.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2), attention scores $r_{t,k}$ can be used as soft alignments between the time step $t$ of the decoder and the $k$-th input token.", "We compute the normalized uncertainty score $\hat{u}_{q_k}$ for a token $q_k$ via: $\hat{u}_{q_k} \propto \sum_{t=1}^{|a|} r_{t,k} u_{a_t}$ (8), where $u_{a_t}$ is the uncertainty score of the predicted token $a_t$ (Equation (7)), and $\sum_{k=1}^{|q|} \hat{u}_{q_k} = 1$.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not know a priori which tokens in the natural language input contribute to uncertainty, and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise into the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token $q_t$ (Equation (6)); the addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise into one token at a time instead of all parameters (see Figure 1).", "Tokens identified as uncertain by the above procedure are considered the gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list $\tau_1$ of the $K$ tokens with the highest scores.", "We also obtain a list $\tau_2$ of the $K$ tokens with the highest ground-truth scores and measure the degree of overlap between these two lists: $\mathrm{overlap@}K = \frac{|\tau_1 \cap \tau_2|}{K}$, where $K \in \{2, 4\}$ in our experiments.", "For example, the overlap@4 metric of the lists $\tau_1 = [q_7, q_8, q_2, q_3]$ and $\tau_2 = [q_7, q_8, q_3, q_4]$ is 3/4, because there are three overlapping tokens.", "Table 6 reports results with overlap@2 and overlap@4.", "[Table 6: Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard. Overlap is shown for the top 2 and 4 tokens; best results are in bold. ATTENTION: IFTTT 0.525 (@2) / 0.737 (@4), DJANGO 0.637 (@2) / 0.684 (@4). BACKPROP: IFTTT 0.608 (@2) / 0.791 (@4), DJANGO 0.770 (@2) / 0.788 (@4).]", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.
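A small sketch of the overlap@K computation follows, with a toy check analogous to the τ1/τ2 example above; the score arrays are hypothetical interpretation scores indexed by input position.

```python
def overlap_at_k(model_scores, gold_scores, k):
    """overlap@K: fraction of shared tokens between the K highest-scoring
    input positions under the interpretation method and under the
    noise-injection proxy gold standard."""
    top = lambda scores: set(sorted(range(len(scores)),
                                    key=lambda i: scores[i], reverse=True)[:k])
    return len(top(model_scores) & top(gold_scores)) / k

# Toy check: three of the top-4 positions agree, giving 3/4 as in the example.
m = [0.0, 0.0, 0.20, 0.15, 0.0, 0.0, 0.0, 0.40, 0.25]  # method scores
g = [0.0, 0.0, 0.0, 0.20, 0.10, 0.0, 0.0, 0.40, 0.30]  # proxy gold scores
print(overlap_at_k(m, g, k=4))  # 0.75
```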
Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight a token $a_t$ if its uncertainty score $u_{a_t}$ is greater than $0.5 \cdot \mathrm{avg}\{u_{a_{t'}}\}_{t'=1}^{|a|}$.", "[Table 7: Uncertainty interpretation for ATTENTION (ATT) and BACKPROP (BP). The first line in each group is the model prediction; predicted tokens and input words with large scores are shown in red and blue in the original, respectively. (1) google calendar−any event starts THEN facebook−create a status message−(status message ({description})) | ATT: post calendar event to facebook | BP: post calendar event to facebook. (2) feed−new feed item−(feed url( url sports.espn.go.com)) THEN ... | ATT: espn mlb headline to readability | BP: espn mlb headline to readability. (3) weather−tomorrow's low drops below−((temperature(0)) (degrees in(c))) THEN ... | ATT: warn me when it's going to be freezing tomorrow | BP: warn me when it's going to be freezing tomorrow. (4) if str number[0] == ' STR ': | ATT: if first element of str number equals a string STR . | BP: if first element of str number equals a string STR . (5) start = 0 | ATT: start is an integer 0 . | BP: start is an integer 0 . (6) if name.startswith(' STR '): | ATT: if name starts with an string STR , | BP: if name starts with an string STR ,]", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs and message content), and about ambiguous inputs.", "The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments, while BACKPROP can utilize information flowing through the LSTMs rather than relying only on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures.", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-13
Experiments
IFTTT-style semantic parsing (Quirk et al., 2015): Archive your missed calls from Android to Google Drive. Python code generation (Yin et al., 2017).
IFTTT-style semantic parsing (Quirk et al., 2015): Archive your missed calls from Android to Google Drive. Python code generation (Yin et al., 2017).
[]
GEM-SciDuet-train-114#paper-1307#slide-14
1307
Confidence Modeling for Neural Semantic Parsing
GEM-SciDuet-train-114#paper-1307#slide-14
Confidence Estimation
Spearman correlation (ρ) between confidence score and F1 score. [charts: Spearman correlation, Posterior vs. Conf, per dataset] Confidence scores are used as a threshold to filter out uncertain examples.
Spearman correlation (ρ) between confidence score and F1 score. [charts: Spearman correlation, Posterior vs. Conf, per dataset] Confidence scores are used as a threshold to filter out uncertain examples.
[]
GEM-SciDuet-train-114#paper-1307#slide-15
1307
Confidence Modeling for Neural Semantic Parsing
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequencelevel metric is computed by: var{p(a i |q)} K i=1 where a 1 .", ".", ".", "a K are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = − a p(a |q) log p(a |q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by avg{H[a t |a <t , q]} |a| t=1 and max{H[a t |a <t , q]} |a| t=1 .", "Confidence Scoring The sentence-and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a).", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: (q,a)∈D ln(1+e −ŝ(q,a) ) yq,a + ln(1+eŝ (q,a) ) (1−yq,a) where D represents the data, y q,a is the target F1 score, andŝ(q, a) the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying = v c 1 m u c 1 + v c 2 m u c 2 .", "The score u m is then redistributed to its parent neurons p 1 and p 2 , which satisfies v m p 1 + v m p 2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7) ) from predictions to input tokens, following the ideas of Bach et al.", "(2015) and Zhang et al.", "(2016) .", "Let u m denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2 , u m is computed by the summation of the scores backpropagated from its child neurons: u m = c∈Child(m) v c m u c where Child(m) is the set of m's child neurons, and the non-negative contribution ratio v c m indicates how much we backpropagate u c to neuron m. 
Intuitively, if neuron m contributes more to c's value, ratio v c m should be larger.", "After obtaining score u m , we redistribute it to its parent neurons in the same way.", "Contribution ratios from m to its parent neurons are normalized to 1: p∈Parent(m) v m p = 1 where Parent(m) is the set of m's parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let x denote the input.", "The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈ R |z| * |x| is the weight matrix, b ∈ R |z| is the bias, and neuron z i is computed via z i = σ( |x| j=1 W i,j x j + b i ).", "Neuron x k 's uncertainty score u x k is gath-Algorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ûq t } |q| t=1 : Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: Get token-level uncertainty for predicted tokens 2: {ua t } |a| t=1 ← TokenUnc(q, a) 3: Initialize uncertainty scores for backpropagation 4: for t ← 1, · · · , |a| do 5: Decoder classifier's output neuron ← ua t 6: Run backpropagation 7: for m ← neuron in backward topological order do 8: Gather scores from child neurons 9: um ← c∈Child(m) v c m uc 10: Summarize scores for input words 11: for t ← 1, · · · , |q| do 12: uq t ← c∈q t uc 13: {ûq t } |q| t=1 ← normalize {uq t } |q| t=1 ered from the next layer: u x k = |z| i=1 v z i x k u z i = |z| i=1 |W i,k x k | |x| j=1 |W i,j x j | u z i ignoring the nonlinear function σ and the bias b.", "The ratio v z i x k is proportional to the contribution of x k to the value of z i .", "We define backpropagation rules for elementwise vector operators.", "For z = x ± y, these are: u x k = |x k | |x k |+|y k | u z k u y k = |y k | |x k |+|y k | u z k where the contribution ratios v z k x k and v z k y k are determined by |x k | and |y k |.", "For multiplication, the contribution of two elements in 1 3 * 3 should be the same.", "So, the propagation rules for z = x y are: u x k = | log |x k || | log |x k ||+| log |y k || u z k u y k = | log |y k || | log |x k ||+| log |y k || u z k where the contribution ratios are determined by | log |x k || and | log |y k ||.", "For scalar multiplication, z = λx where λ denotes a constant.", "We directly assign z's uncertainty scores to x and the backpropagation rule is u x k = u z k .", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token a t , we compute its uncertainty score u at as in Equation (7) .", "Next, we find the dimension of a t in the decoder's softmax classifier (Equation (5) ), and initialize the neuron with the uncertainty score u at .", "We then backpropagate these uncertainty scores through Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({' volume level':1.0,'name':'100%'})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): the network (lines 6-9), and finally into the neurons of the input words.", "We summarize them and compute the token-level scores for interpreting the results (line 10-13).", "For input word vector q t , we use the summation of its neuron-level scores as the token-level score:û qt ∝ c∈qt u c where c ∈ q t represents the neurons of word 
vector q t , and |q| t=1û qt = 1.", "We use the normalized scoreû qt to indicate token q t 's contribution to prediction uncertainty.", "Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/ donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1 .", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017) .", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015) .", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016) .", "For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to- Table 2 : Spearman ρ correlation between confidence scores and F1.", "Best results are shown in bold.", "All correlations are significant at p < 0.01. 
kens in the prediction with the input words they align to (Luong et al., 2015b) .", "The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017) .", "To estimate model uncertainty, we set dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013) .", "For input uncertainty, we computed variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on posterior probability p(a|q) (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation).", "High ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2 , our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improve- Table 3 .", "ments for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994) .", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016) .", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays Table 5 : Importance scores of confidence metrics (normalized by maximum value on each dataset).", "Best results are shown in bold.", "Same shorthands apply as in Table 3. 
the most important role.", "On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade-off precision for coverage.", "Figure 3 shows how F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "F1 score improves monotonically for POSTERIOR and our method, which, however, achieves better performance when coverage is the same.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2) , attention scores r t,k can be used as soft alignments between the time step t of the decoder and the k-th input token.", "We compute the normalized uncertainty scoreû qt for a token q t via: u qt ∝ |a| t=1 r t,k u at (8) where u at is the uncertainty score of the predicted token a t (Equation (7) ), and |q| t=1û qt = 1.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not a priori know which tokens in the natural language input contribute to uncertainty and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise to the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token q t (Equation (6) addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise to one token at a time 1 instead of all parameters (see Figure 1 ).", "Tokens identified as uncertain by the above procedure are considered gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list τ 1 of K tokens with highest scores.", "We also obtain a list τ 2 of K tokens with highest ground-truth scores and measure the degree of overlap between these two lists: overlap@K = |τ 1 ∩ τ 2 | K Method IFTTT DJANGO @2 @4 @2 @4 ATTENTION 0.525 0.737 0.637 0.684 BACKPROP 0.608 0.791 0.770 0.788 Table 6 : Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard.", "Overlap is shown for top 2 and 4 tokens.", "Best results are in bold.", "google calendar−any event starts THEN facebook −create a status message−(status message ({description})) ATT post calendar event to facebook BP post calendar event to facebook feed−new feed item−(feed url( url sports.espn.go.com)) THEN ... ATT espn mlb headline to readability BP espn mlb headline to readability weather−tomorrow's low drops below−(( temperature(0)) (degrees in(c))) THEN ... 
ATT warn me when it's going to be freezing tomorrow BP warn me when it's going to be freezing tomorrow if str number[0] == ' STR ': ATT if first element of str number equals a string STR .", "BP if first element of str number equals a string STR .", "start = 0 ATT start is an integer 0 .", "BP start is an integer 0 .", "if name.startswith(' STR '): ATT if name starts with an string STR , BP if name starts with an string STR , Table 7 : Uncertainty interpretation for ATTEN-TION (ATT) and BACKPROP (BP) .", "The first line in each group is the model prediction.", "Predicted tokens and input words with large scores are shown in red and blue, respectively.", "where K ∈ {2, 4} in our experiments.", "For example, the overlap@4 metric of the lists τ 1 = [q 7 , q 8 , q 2 , q 3 ] and τ 2 = [q 7 , q 8 , q 3 , q 4 ] is 3/4, because there are three overlapping tokens.", "Table 6 reports results with overlap@2 and overlap@4.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.", "Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight token a t if its uncertainty score u at is greater than 0.5 * avg{u a t } |a| t =1 .", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs, and message content), and ambiguous inputs.", "The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments while BACKPROP can utilize information flowing through the LSTMs rather than only relying on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures.", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
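The encoder-decoder equations quoted in the paper text above (Equations 2-4) are compact enough to sketch directly. Below is a minimal NumPy illustration with toy dimensions and random vectors standing in for real LSTM states; every name is ours for illustration, not taken from the authors' released code (github.com/donglixp/confidence):

```python
import numpy as np

n, src_len = 4, 3                         # toy hidden size and source length
rng = np.random.default_rng(0)
E = rng.normal(size=(src_len, n))         # encoder states e_1 .. e_|q|
d_t = rng.normal(size=n)                  # decoder state at step t
W1 = rng.normal(size=(n, n))
W2 = rng.normal(size=(n, n))

scores = E @ d_t                          # d_t . e_k for each k (Equation 2)
r_t = np.exp(scores - scores.max())       # softmax, shifted for stability
r_t /= r_t.sum()                          # attention weights sum to 1
c_t = r_t @ E                             # context vector (Equation 3)
d_att = np.tanh(W1 @ d_t + W2 @ c_t)      # attentional vector (Equation 4)
```

Equation (5) would follow by projecting d_att through W_o and taking a softmax over the output vocabulary.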
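Algorithm 1 (dropout perturbation) boils down to leaving dropout active at inference time and measuring how much the prediction probability moves across repeated stochastic passes. A hedged sketch, assuming a caller-supplied stochastic_token_probs() that runs one such perturbed forward pass; the toy stand-in below only mimics its output shape:

```python
import numpy as np

def mc_dropout_uncertainty(stochastic_token_probs, passes=30):
    """stochastic_token_probs() is assumed to run one forward pass with the
    dropout layers still on and return p(a_t | a_<t, q; M_i) per token."""
    runs = np.stack([stochastic_token_probs() for _ in range(passes)])
    seq_unc = runs.prod(axis=1).var()      # Equation (6): var of p(a|q; M_i)
    token_unc = runs.var(axis=0)           # Equation (7): u_{a_t} per token
    return seq_unc, token_unc.mean(), token_unc.max()

# toy stand-in for a perturbed parser: 3 predicted tokens, jittered probabilities
rng = np.random.default_rng(1)
toy = lambda: np.clip(0.8 + 0.05 * rng.normal(size=3), 0.0, 1.0)
print(mc_dropout_uncertainty(toy))
```

The default of 30 passes mirrors the setting reported in Section 6.2.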
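The Gaussian-noise variant described in Section 4.1 swaps the Bernoulli dropout masks for additive or multiplicative Gaussian perturbations of a vector. A small helper under the same illustrative assumptions:

```python
import numpy as np

def perturb(v, sigma=0.05, multiplicative=False, rng=None):
    """Return v + g (additive) or v + v * g (multiplicative), g ~ N(0, sigma^2).
    sigma = 0.05 is the standard deviation reported in Section 6.2."""
    rng = rng if rng is not None else np.random.default_rng()
    g = rng.normal(0.0, sigma, size=np.shape(v))
    return v + v * g if multiplicative else v + g
```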
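The posterior-probability metrics of Section 4.1 need nothing beyond the per-token log-probabilities the decoder already produces. A sketch (the dictionary keys are our own labels):

```python
import numpy as np

def posterior_metrics(token_log_probs):
    lp = np.asarray(token_log_probs)         # log p(a_t | a_<t, q) per token
    return {
        "log_prob": lp.sum(),                # sequence-level log p(a|q)
        "min_token_prob": np.exp(lp).min(),  # flags the most uncertain token
        "per_token_nll": -lp.mean(),         # the paper's "perplexity per token"
    }
```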
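For data uncertainty (Section 4.2), the paper trains a language model with KenLM and counts unknown input tokens. Assuming a KenLM ARPA file built from the parser's training sentences (the file name below is hypothetical), this is roughly:

```python
import kenlm                                  # pip install kenlm

lm = kenlm.Model("parser_train.arpa")         # hypothetical LM over training sentences

def data_uncertainty(tokens, train_vocab):
    return {
        "log_p_input": lm.score(" ".join(tokens)),     # log10 p(q | D) under KenLM
        "num_unk": sum(t not in train_vocab for t in tokens),
    }
```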
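The two input-uncertainty metrics (Section 4.3) reuse quantities produced during decoding: the probabilities of the K-best beam candidates and Monte Carlo samples for the entropy approximation. A sketch under those assumptions:

```python
import numpy as np

def input_uncertainty(kbest_log_probs, sampled_log_probs):
    """kbest_log_probs: log p(a_i|q) for the K-best beam candidates (K=10 in
    the paper). sampled_log_probs: log-probs of Monte Carlo samples drawn
    from the decoder, so H[a|q] is estimated as -mean(log p(sample|q))."""
    return {
        "var_kbest": np.exp(np.asarray(kbest_log_probs)).var(),
        "decoding_entropy": -np.mean(sampled_log_probs),
    }
```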
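The confidence scorer itself (Section 4.4) is gradient tree boosting wrapped in a logistic function and trained on held-out F1 targets. One plausible XGBoost rendering is below; note that the reg:logistic objective implements exactly the y*ln(1+e^{-s}) + (1-y)*ln(1+e^{s}) loss given in the paper, the hyperparameters mirror the ranges in Section 6.2, and the data here is random toy input:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))     # one row of confidence metrics per (q, a)
y = rng.random(200)                # target: the prediction's F1 in [0, 1]

scorer = xgb.XGBRegressor(
    objective="reg:logistic",      # sigmoid link keeps s(q, a) in (0, 1)
    n_estimators=50,               # paper selects from {20, 50}
    max_depth=4,                   # paper selects from {3, 4, 5}
    subsample=0.8,
)
scorer.fit(X, y)
print(scorer.predict(X[:3]))       # confidence scores for three examples
```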
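The uncertainty-backpropagation rules of Section 5 redistribute a neuron's score to its inputs in proportion to their contribution. Sketches for the fully-connected and elementwise-addition cases (function names are ours; the small epsilon guards against division by zero and is not in the paper):

```python
import numpy as np

def backprop_fc(u_z, W, x, eps=1e-12):
    """z = sigma(Wx + b): route u_z back to x with ratios
    |W_ik x_k| / sum_j |W_ij x_j|, ignoring sigma and b as in Section 5."""
    contrib = np.abs(W * x)                          # (|z|, |x|): |W_ij x_j|
    ratios = contrib / (contrib.sum(axis=1, keepdims=True) + eps)
    return ratios.T @ u_z                            # u_{x_k}

def backprop_add(u_z, x, y, eps=1e-12):
    """z = x +/- y: split u_z according to |x_k| versus |y_k|."""
    ax, ay = np.abs(x), np.abs(y)
    share = ax / (ax + ay + eps)
    return u_z * share, u_z * (1.0 - share)
```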
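The overlap@K evaluation metric from Section 6.3, including the paper's own worked example, fits in a few lines:

```python
def overlap_at_k(tau1, tau2, k):
    """Fraction of shared tokens among the two top-k lists."""
    return len(set(tau1[:k]) & set(tau2[:k])) / k

# the paper's worked example: 3 of the top-4 tokens overlap
print(overlap_at_k(["q7", "q8", "q2", "q3"], ["q7", "q8", "q3", "q4"], 4))  # 0.75
```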
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-15
Importance of Confidence Metrics
Model Uncertainty Data Uncertainty Input Uncertainty
Model Uncertainty Data Uncertainty Input Uncertainty
[]
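For the slide above ("Importance of Confidence Metrics"), the paper measures a metric's importance as the average gain it brings when used as a split feature (Table 5). Continuing the hypothetical XGBoost scorer sketched earlier, that quantity is exposed directly; with unnamed NumPy features, XGBoost reports them as f0, f1, and so on:

```python
gain = scorer.get_booster().get_score(importance_type="gain")
for feat, g in sorted(gain.items(), key=lambda kv: -kv[1])[:5]:
    print(feat, round(g, 2))   # model-uncertainty metrics dominate in the paper
```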
GEM-SciDuet-train-114#paper-1307#slide-16

1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
modeling framework for sequenceto-sequence models.", "We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al.", "2015; and DJANGO, Oda et al.", "2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) , and question answering (Gondek et al., 2012) .", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017) .", "But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process.", "We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015) .", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016) , data augmentation (Jia and Liang, 2016; , the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; , coarse-tofine decoding (Dong and Lapata, 2018) , network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017) , user feedback (Iyer et al., 2017) , and transfer learning (Fan et al., 2017) .", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected inadvertently 
affecting user experience.", "Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1 .", "An encoder is used to encode natural language input q = q 1 · · · q |q| into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a 1 · · · a |a| conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence p (a|q) is factorized as: p (a|q) = |a| t=1 p (a t |a <t , q) (1) where a <t = a 1 · · · a t−1 .", "Let e t ∈ R n denote the hidden vector of the encoder at time step t. It is computed via e t = f LSTM (e t−1 , q t ), where f LSTM refers to the LSTM unit, and q t ∈ R n is the word embedding … … … <s> … … … i) iii) i) ii) iv) Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty.", "The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.", "of q t .", "Once the tokens of the input sequence are encoded into vectors, e |q| is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step t is computed by d t = f LSTM (d t−1 , a t−1 ), where a t−1 ∈ R n is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as: r t,k ∝ exp{d t · e k } (2) where |q| j=1 r t,j = 1.", "The probability of generating a t is computed via: c t = |q| k=1 r t,k e k (3) d att t = tanh (W 1 d t + W 2 c t ) (4) p (a t |a <t , q) = softmax at W o d att t (5) where W 1 , W 2 ∈ R n×n and W o ∈ R |Va|×n are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize (q,a)∈D log p (a|q), where D represents training pairs.", "At test time, the model's prediction for input q is obtained viâ a = arg max a p (a |q), where a represents candidate outputs.", "Because p (a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ← 1, · · · , F do 2:M i ← Apply dropout layers to M Figure 1 3: Run forward pass and computep(a|q;M i ) 4: Compute variance of {p(a|q;M i )} F i=1 Equation (6) score s (q, a) ∈ (0, 1).", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to 
predict s (q, a).", "Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q).", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016) .", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {p(a|q; M i )} F i=1 whereM i represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: var{p(a|q;M i )} F i=1 .", "(6) In addition, we compute uncertainty u at at the token-level a t via: u at = var{p(a t |a <t , q;M i )} F i=1 (7) wherep(a t |a <t , q;M i ) is the probability of generating token a t (Equation (5) ) using perturbed modelM i .", "We operationalize tokenlevel uncertainty in two ways, as the average score avg{u at } |a| t=1 and the maximum score max{u at } |a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1 , we add dropout layers in i) the word vectors of the encoder and decoder q t , a t ; ii) the output vectors of the encoder e t ; iii) bridge vectors e |q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors d att t (Equation (4) ).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N (0, σ 2 ).", "We usev = v + g andv = v + v g as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability log p(a|q) as a sequence-level metric.", "The token-level metric min{p(a t |a <t , q)} |a| t=1 can identify the most uncertain predicted token.", "The perplexity per token − 1 |a| |a| t=1 log p (a t |a <t , q) is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input q as a metric.", "Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, 
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequencelevel metric is computed by: var{p(a i |q)} K i=1 where a 1 .", ".", ".", "a K are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = − a p(a |q) log p(a |q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by avg{H[a t |a <t , q]} |a| t=1 and max{H[a t |a <t , q]} |a| t=1 .", "Confidence Scoring The sentence-and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a).", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: (q,a)∈D ln(1+e −ŝ(q,a) ) yq,a + ln(1+eŝ (q,a) ) (1−yq,a) where D represents the data, y q,a is the target F1 score, andŝ(q, a) the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying = v c 1 m u c 1 + v c 2 m u c 2 .", "The score u m is then redistributed to its parent neurons p 1 and p 2 , which satisfies v m p 1 + v m p 2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7) ) from predictions to input tokens, following the ideas of Bach et al.", "(2015) and Zhang et al.", "(2016) .", "Let u m denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2 , u m is computed by the summation of the scores backpropagated from its child neurons: u m = c∈Child(m) v c m u c where Child(m) is the set of m's child neurons, and the non-negative contribution ratio v c m indicates how much we backpropagate u c to neuron m. 
Intuitively, if neuron m contributes more to c's value, ratio v c m should be larger.", "After obtaining score u m , we redistribute it to its parent neurons in the same way.", "Contribution ratios from m to its parent neurons are normalized to 1: p∈Parent(m) v m p = 1 where Parent(m) is the set of m's parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let x denote the input.", "The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈ R |z| * |x| is the weight matrix, b ∈ R |z| is the bias, and neuron z i is computed via z i = σ( |x| j=1 W i,j x j + b i ).", "Neuron x k 's uncertainty score u x k is gath-Algorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ûq t } |q| t=1 : Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: Get token-level uncertainty for predicted tokens 2: {ua t } |a| t=1 ← TokenUnc(q, a) 3: Initialize uncertainty scores for backpropagation 4: for t ← 1, · · · , |a| do 5: Decoder classifier's output neuron ← ua t 6: Run backpropagation 7: for m ← neuron in backward topological order do 8: Gather scores from child neurons 9: um ← c∈Child(m) v c m uc 10: Summarize scores for input words 11: for t ← 1, · · · , |q| do 12: uq t ← c∈q t uc 13: {ûq t } |q| t=1 ← normalize {uq t } |q| t=1 ered from the next layer: u x k = |z| i=1 v z i x k u z i = |z| i=1 |W i,k x k | |x| j=1 |W i,j x j | u z i ignoring the nonlinear function σ and the bias b.", "The ratio v z i x k is proportional to the contribution of x k to the value of z i .", "We define backpropagation rules for elementwise vector operators.", "For z = x ± y, these are: u x k = |x k | |x k |+|y k | u z k u y k = |y k | |x k |+|y k | u z k where the contribution ratios v z k x k and v z k y k are determined by |x k | and |y k |.", "For multiplication, the contribution of two elements in 1 3 * 3 should be the same.", "So, the propagation rules for z = x y are: u x k = | log |x k || | log |x k ||+| log |y k || u z k u y k = | log |y k || | log |x k ||+| log |y k || u z k where the contribution ratios are determined by | log |x k || and | log |y k ||.", "For scalar multiplication, z = λx where λ denotes a constant.", "We directly assign z's uncertainty scores to x and the backpropagation rule is u x k = u z k .", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token a t , we compute its uncertainty score u at as in Equation (7) .", "Next, we find the dimension of a t in the decoder's softmax classifier (Equation (5) ), and initialize the neuron with the uncertainty score u at .", "We then backpropagate these uncertainty scores through Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({' volume level':1.0,'name':'100%'})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): the network (lines 6-9), and finally into the neurons of the input words.", "We summarize them and compute the token-level scores for interpreting the results (line 10-13).", "For input word vector q t , we use the summation of its neuron-level scores as the token-level score:û qt ∝ c∈qt u c where c ∈ q t represents the neurons of word 
vector q t , and |q| t=1û qt = 1.", "We use the normalized scoreû qt to indicate token q t 's contribution to prediction uncertainty.", "Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/ donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1 .", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017) .", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015) .", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016) .", "For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to- Table 2 : Spearman ρ correlation between confidence scores and F1.", "Best results are shown in bold.", "All correlations are significant at p < 0.01. 
kens in the prediction with the input words they align to (Luong et al., 2015b) .", "The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017) .", "To estimate model uncertainty, we set dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013) .", "For input uncertainty, we computed variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on posterior probability p(a|q) (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation).", "High ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2 , our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improve- Table 3 .", "ments for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994) .", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016) .", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays Table 5 : Importance scores of confidence metrics (normalized by maximum value on each dataset).", "Best results are shown in bold.", "Same shorthands apply as in Table 3. 
the most important role.", "On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade-off precision for coverage.", "Figure 3 shows how F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "F1 score improves monotonically for POSTERIOR and our method, which, however, achieves better performance when coverage is the same.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2) , attention scores r t,k can be used as soft alignments between the time step t of the decoder and the k-th input token.", "We compute the normalized uncertainty scoreû qt for a token q t via: u qt ∝ |a| t=1 r t,k u at (8) where u at is the uncertainty score of the predicted token a t (Equation (7) ), and |q| t=1û qt = 1.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not a priori know which tokens in the natural language input contribute to uncertainty and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise to the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token q t (Equation (6) addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise to one token at a time 1 instead of all parameters (see Figure 1 ).", "Tokens identified as uncertain by the above procedure are considered gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list τ 1 of K tokens with highest scores.", "We also obtain a list τ 2 of K tokens with highest ground-truth scores and measure the degree of overlap between these two lists: overlap@K = |τ 1 ∩ τ 2 | K Method IFTTT DJANGO @2 @4 @2 @4 ATTENTION 0.525 0.737 0.637 0.684 BACKPROP 0.608 0.791 0.770 0.788 Table 6 : Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard.", "Overlap is shown for top 2 and 4 tokens.", "Best results are in bold.", "google calendar−any event starts THEN facebook −create a status message−(status message ({description})) ATT post calendar event to facebook BP post calendar event to facebook feed−new feed item−(feed url( url sports.espn.go.com)) THEN ... ATT espn mlb headline to readability BP espn mlb headline to readability weather−tomorrow's low drops below−(( temperature(0)) (degrees in(c))) THEN ... 
"[Table 7: Uncertainty interpretation for ATTENTION (ATT) and BACKPROP (BP). The first line in each group is the model prediction. Predicted tokens and input words with large scores are shown in red and blue, respectively.]", "Prediction: google calendar−any event starts THEN facebook−create a status message−(status message({description})) | ATT: post calendar event to facebook | BP: post calendar event to facebook", "Prediction: feed−new feed item−(feed url(url sports.espn.go.com)) THEN ... | ATT: espn mlb headline to readability | BP: espn mlb headline to readability", "Prediction: weather−tomorrow's low drops below−((temperature(0)) (degrees in(c))) THEN ... | ATT: warn me when it's going to be freezing tomorrow | BP: warn me when it's going to be freezing tomorrow", "Prediction: if str number[0] == ' STR ': | ATT: if first element of str number equals a string STR . | BP: if first element of str number equals a string STR .", "Prediction: start = 0 | ATT: start is an integer 0 . | BP: start is an integer 0 .", "Prediction: if name.startswith(' STR '): | ATT: if name starts with an string STR , | BP: if name starts with an string STR ,", "Table 6 reports results with overlap@2 and overlap@4.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.", "Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight a predicted token $a_t$ if its uncertainty score $u_{a_t}$ is greater than $0.5 \cdot \mathrm{avg}\{u_{a_{t'}}\}_{t'=1}^{|a|}$.", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs and message content) and about ambiguous inputs.", "The examples show that BACKPROP is qualitatively better than ATTENTION; attention scores often produce inaccurate alignments, while BACKPROP can utilize information flowing through the LSTMs rather than relying only on the attention mechanism.", "Conclusions.", "In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks employing sequence-to-sequence architectures (Bahdanau et al., 2015; Schmaltz et al., 2017).", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-16
Examples IFTTT
ATT: attention; BP: uncertainty backpropagation
ATT: attention; BP: uncertainty backpropagation
[]
GEM-SciDuet-train-115#paper-1308#slide-0
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and in (2) two downstream tasks: lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia).", "The latter, in general, outperform the former (Mrkšić et al., 2016).", "Retrofitting models can be applied to arbitrary distributional spaces, but they suffer from a major limitation: they locally update only the vectors of words present in the external constraints, whereas the vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies while mitigating their limitations.", "Like retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize the vectors of all vocabulary words, as joint models do.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is applied directly on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feed-forward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks (Hill et al., 2015; Gerz et al., 2016), as well as in two downstream tasks: lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017), we can also specialize distributional spaces for languages unseen in the training data, in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work.", "The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking (Mrkšić et al., 2017; Vulić et al., 2017b), spoken language understanding (Kim et al., 2016a,b), judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017), lexical contrast modeling (Nguyen et al., 2016), and cross-lingual transfer of lexical resources (Vulić et al., 2017a).", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources: they typically leverage WordNet (Fellbaum, 1998), FrameNet (Baker et al., 1998), the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015), morphological lexicons (Cotterell et al., 2016), or simple handcrafted linguistic rules (Vulić et al., 2017b).", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b), or Canonical Correlation Analysis (Dhillon et al., 2015).
", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into, e.g., an SGNS- or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016, 2017).", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016), these models are also tied to the distributional objective: any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pre-trained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016).", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model or expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space, and not only the vectors of words from the external constraints.", "Explicit Retrofitting.", "Our explicit retrofitting (ER) approach, illustrated by Figure 1a, consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feed-forward neural network.", "This network, shown in Figure 1b, learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances.", "Let $X = \{x_i\}_{i=1}^{N}$, $x_i \in \mathbb{R}^d$, be the d-dimensional distributional vector space that we want to specialize (with $V = \{w_i\}_{i=1}^{N}$ referring to the associated vocabulary), and let $X' = \{x'_i\}_{i=1}^{N}$ be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let $C = \{(w_i, w_j, r)_l\}_{l=1}^{L}$ be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words $w_i$ and $w_j$ and a semantic relation $r$ that holds between them.", "The most recent state-of-the-art retrofitting work (Mrkšić et al., 2017; Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., $r_l \in \{ant, syn\}$.", "Let $g$ be the function measuring the distance between words $w_i$ and $w_j$ based on their vector representations."
"The algorithm for preparing training instances from constraints is guided by the following assumptions: (1) all synonymy pairs $(w_i, w_j, syn)$ should have the minimal possible distance score in the specialized space, i.e., $g(x'_i, x'_j) = g_{min}$; (2) all antonymy pairs $(w_i, w_j, ant)$ should have the maximal distance in the specialized space, i.e., $g(x'_i, x'_j) = g_{max}$; (3) the distances $g(x'_i, x'_k)$ in the specialized space between some word $w_i$ and all other words $w_k$ that are neither synonyms nor antonyms of $w_i$ should be in the interval $(g_{min}, g_{max})$.", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs $(w_i, w_j, r) \in C$ with the distances that the words $w_i$ and $w_j$ from those pairs have with other vocabulary words $w_m$.", "It is intuitive to enforce that synonyms are as close as possible and antonyms as far apart as possible.", "However, we do not know what the distances $g(x'_i, x'_m)$ between non-synonymous and non-antonymous words should look like in the specialized space.", "This is why, for all other words, similar to Faruqui et al. (2016), we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: $g(x'_i, x'_m) = g(x_i, x_m)$.", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks, most errors stem from vectors of semantically related words (e.g., car - driver) being as similar as vectors of semantically similar words (e.g., car - automobile).", "To anticipate this, we compare the distances of pairs $(w_i, w_j, r) \in C$ with the distances for pairs $(w_i, w_m)$ and $(w_j, w_n)$, where $w_m$ and $w_n$ are negative examples: the vocabulary words that are most similar to $w_i$ and $w_j$, respectively, in the original distributional space X.", "Concretely, for each constraint $(w_i, w_j, r) \in C$ we retrieve (1) the K vocabulary words $\{w_m^k\}_{k=1}^{K}$ that are closest in the input distributional space (according to the distance function g) to the word $w_i$, and (2) the K vocabulary words $\{w_n^k\}_{k=1}^{K}$ that are closest to the word $w_j$.", "We then create, for each constraint $(w_i, w_j, r) \in C$, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: $M(w_i, w_j, r) = \{(x_i, x_j, g_r)\} \cup \{(x_i, x_m^k, g(x_i, x_m^k))\}_{k=1}^{K} \cup \{(x_j, x_n^k, g(x_j, x_n^k))\}_{k=1}^{K}$ (1), with $g_r = g_{min}$ if $r = syn$ and $g_r = g_{max}$ if $r = ant$.", "[Figure 1: (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms such as (bright, light, syn), (source, target, ant), and (buy, acquire, syn), are transformed into respective micro-batches, which are then used to train the supervised specialization model. (b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f, defined as a deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization.]"
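To make the micro-batch construction of Eq. (1) concrete, here is an illustrative sketch. The brute-force nearest-neighbour search and the function name `build_micro_batch` are assumptions for exposition, not the authors' implementation; g_min = 0 and g_max = 2 correspond to the extreme values of the cosine distance used later in the paper.

```python
import numpy as np

G_MIN, G_MAX = 0.0, 2.0  # extreme values of cosine distance

def cos_dist(a, b):
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def build_micro_batch(X, vocab, wi, wj, rel, K=4):
    """Return 2K+1 (x1, x2, target distance) triples for one constraint.

    X is the (N, d) matrix of distributional vectors, vocab maps words
    to row indices, and rel is either "syn" or "ant".
    """
    xi, xj = X[vocab[wi]], X[vocab[wj]]
    g_r = G_MIN if rel == "syn" else G_MAX
    batch = [(xi, xj, g_r)]
    for x in (xi, xj):
        # K nearest vocabulary words in the *original* space as negatives,
        # paired with their original-space distances (Eq. 1)
        d = np.array([cos_dist(x, X[m]) for m in range(len(X))])
        for m in np.argsort(d)[1 : K + 1]:   # index 0 is the word itself
            batch.append((x, X[m], d[m]))
    return batch
```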
"Non-Linear Specialization Function.", "Our retrofitting framework learns a global explicit specialization function which, when applied to a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters θ of the parametrized function $f(x; \theta): \mathbb{R}^d \to \mathbb{R}^d$ (where d is the dimensionality of the input space).", "The specialized embedding $x'_i$ of the word $w_i$ is then obtained as $x'_i = f(x_i; \theta)$.", "The specialized space $X'$ is obtained by transforming the distributional vectors of all vocabulary words, $X' = f(X; \theta)$.", "We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations φ.", "The illustration of this network is given in Figure 1b.", "The i-th hidden layer is defined with a weight matrix $W_i$ and a bias vector $b_i$: $h_i(x; \theta_i) = \phi\left(h_{i-1}(x; \theta_{i-1}) W_i + b_i\right)$ (2), where $\theta_i$ is the subset of the network's parameters up to the i-th layer.", "Note that in this notation, $x = h_0(x; \emptyset)$ and $x' = f(x; \theta) = h_H(x; \theta)$.", "Let $d_h$ be the size of the hidden layers.", "The network's parameters are then as follows: $W_1 \in \mathbb{R}^{d \times d_h}$; $W_i \in \mathbb{R}^{d_h \times d_h}$ for $i \in \{2, \dots, H-1\}$; $W_H \in \mathbb{R}^{d_h \times d}$; $b_i \in \mathbb{R}^{d_h}$ for $i \in \{1, \dots, H-1\}$; $b_H \in \mathbb{R}^{d}$.", "Optimization Objectives.", "We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors $x_i$ and $x_j$ and a score g denoting the desired distance between the specialized vectors $x'_i$ and $x'_j$ of the corresponding words $w_i$ and $w_j$.", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of N training instances, $\{(x_1^i, x_2^i, g^i)\}_{i=1}^{N}$.", "The simplest objective function is then the squared difference between the desired and obtained distances of the specialized vectors: $J_{MSD} = \sum_{i=1}^{N} \left( g(f(x_1^i), f(x_2^i)) - g^i \right)^2$ (3).", "By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space $X'$ in which the distances between all synonyms amount to $g_{min}$, the distances between all antonyms amount to $g_{max}$, and the distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs $(w_i, w_j)$ have smaller (or larger) distances than the corresponding non-constraint word pairs $(w_i, w_k)$ and $(w_j, w_k)$."
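As a concrete reference, here is a minimal PyTorch sketch of the specialization network f (Eq. (2)) and the ER-MSD loss (Eq. (3)). Hyperparameter values follow the paper (H = 5, d_h = 1000, φ = tanh), but the module itself is an illustrative reimplementation under those assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpecializationNet(nn.Module):
    """f(x; theta): an H-layer feed-forward map R^d -> R^d (Eq. 2)."""
    def __init__(self, d, d_h=1000, H=5):
        super().__init__()
        dims = [d] + [d_h] * (H - 1) + [d]
        self.layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(H)])

    def forward(self, x):
        for layer in self.layers:      # phi = tanh at every layer
            x = torch.tanh(layer(x))
        return x

def cos_dist(a, b):
    """g(x1, x2) = 1 - cos(x1, x2), the distance used in the paper."""
    return 1.0 - F.cosine_similarity(a, b, dim=-1)

def er_msd_loss(f, x1, x2, g_target):
    """J_MSD (Eq. 3): squared gap between desired and obtained distances."""
    return ((cos_dist(f(x1), f(x2)) - g_target) ** 2).sum()
```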
"Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective micro-batch (cf. Eq. (1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonym (antonym) pairs $(w_i, w_j)$ are larger (smaller, for antonyms) than for the pairs $(w_i, w_k)$ and $(w_j, w_k)$ involving the same words $w_i$ and $w_j$, respectively.", "Let S and A be the sets of micro-batches created from synonymy and antonymy constraints.", "Let $M_s = \{(x_1^i, x_2^i, g^i)\}_{i=1}^{2K+1}$ be one micro-batch created from one synonymy constraint, and let $M_a$ be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for i = 1) in every micro-batch corresponds to the constraint pair and the remaining 2K triples (i.e., for $i \in \{2, \dots, 2K+1\}$) to the respective non-constraint word pairs.", "We then define the contrastive objective as follows: $J_{CNT} = \sum_{M_s \in S} \sum_{i=2}^{2K+1} \left( (g^i - g_{min}) - (g'^i - g'^1) \right)^2 + \sum_{M_a \in A} \sum_{i=2}^{2K+1} \left( (g_{max} - g^i) - (g'^1 - g'^i) \right)^2$, where $g'$ is a short-hand notation for the distance between vectors in the specialized space, i.e., $g'(x_1, x_2) = g(x'_1, x'_2) = g(f(x_1), f(x_2))$.", "Topological Regularization.", "Because the distributional space X already contains useful semantic information, we want our specialized space $X'$ to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of X.", "To this end, we define an additional regularization objective that measures the distance between the original vectors $x_1$ and $x_2$ and their specialized counterparts $x'_1 = f(x_1)$ and $x'_2 = f(x_2)$, for all examples in the training set: $J_{REG} = \sum_{i=1}^{N} g(x_1^i, f(x_1^i)) + g(x_2^i, f(x_2^i))$ (4).", "We minimize the final objective function $J = J' + \lambda J_{REG}$, where $J'$ is either $J_{MSD}$ or $J_{CNT}$ and λ is the regularization factor which determines how strictly we retain the topology of the original space."
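Extending the sketch above to the ER-CNT loss and the topological regularizer of Eq. (4); `cos_dist` is reused from the previous block, the micro-batch layout follows Section 3.1 (first triple = constraint pair, remaining 2K triples = negatives), and the helper names are again illustrative.

```python
def er_cnt_loss(f, batch, rel, g_min=0.0, g_max=2.0):
    """J_CNT for one micro-batch; batch = [(x1, x2, g_orig), ...]."""
    gp = [cos_dist(f(x1), f(x2)) for x1, x2, _ in batch]  # g' (specialized)
    g = [g_orig for _, _, g_orig in batch]                # g  (original)
    loss = 0.0
    for i in range(1, len(batch)):  # negatives: i = 2 .. 2K+1 in the paper
        if rel == "syn":
            loss = loss + ((g[i] - g_min) - (gp[i] - gp[0])) ** 2
        else:  # antonymy micro-batch
            loss = loss + ((g_max - g[i]) - (gp[0] - gp[i])) ** 2
    return loss

def topo_reg(f, batch):
    """J_REG (Eq. 4): keep specialized vectors close to the originals."""
    return sum(cos_dist(x1, f(x1)) + cos_dist(x2, f(x2))
               for x1, x2, _ in batch)

# final objective: J = J' + lambda * J_REG, with lambda = 0.3 in the paper
```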
"Experimental Setup.", "Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 - vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b), using context windows of size 2; (2) GLOVE-CC - vectors trained with the GloVe model (Pennington et al., 2014) on the Common Crawl; and (3) FASTTEXT - vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017).", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015).", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there are only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the vocabulary of the pre-trained distributional vector spaces.", "For example, only 15.3% of the words from the constraints are found in the whole vocabulary of the SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% of the constraint words among the 200K most frequent words of the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods - being able to specialize only the vectors of words seen in the external constraints - and the need for our global ER method, which can specialize all word vectors of the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function g to cosine distance, $g(x_1, x_2) = 1 - \frac{x_1 \cdot x_2}{\|x_1\| \|x_2\|}$, and use the hyperbolic tangent as activation, φ = tanh.", "For each constraint $(w_i, w_j)$, we create K = 4 corresponding negative examples for both $w_i$ and $w_j$, resulting in micro-batches with 2K + 1 = 9 training instances.", "We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values - the number of hidden layers H = 5 and their size $d_h$ = 1000, and the topological regularization factor λ = 0.3 - by minimizing the model's objective J on the validation set.", "We train the model in mini-batches, each containing $N_b$ = 100 constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with the initial learning rate set to $10^{-4}$.", "We use the loss on the validation set as the early stopping criterion.", "Results and Discussion.", "Word Similarity.", "Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: the SimLex-999 dataset (Hill et al., 2015) and SimVerb-3500 (Gerz et al., 2016), a recent dataset containing human similarity ratings for 3,500 verb pairs.", "We use Spearman's ρ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap), we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report the performance of the state-of-the-art local retrofitting model ATTRACT-REPEL (Mrkšić et al., 2017), which is able to specialize only the words from the linguistic constraints.", "Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1.", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform the original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from the linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective forces the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words."
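A small sketch of the intrinsic evaluation protocol just described: Spearman's ρ between human similarity ratings and cosine similarities in a (specialized) vector space. The SimLex-style dataset is assumed to be given as word-pair/score triples; only `scipy.stats.spearmanr` is relied on.

```python
import numpy as np
from scipy.stats import spearmanr

def eval_word_similarity(X, vocab, pairs):
    """pairs: iterable of (word1, word2, human_score); returns Spearman rho."""
    gold, pred = [], []
    for w1, w2, score in pairs:
        if w1 in vocab and w2 in vocab:      # skip out-of-vocabulary pairs
            a, b = X[vocab[w1]], X[vocab[w2]]
            pred.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
            gold.append(score)
    rho, _ = spearmanr(gold, pred)
    return rho
```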
"Finally, the post-processing ATTRACT-REPEL model, based on local vector updates, seems to substantially outperform the ER method in this task.", "The gap is especially visible for the FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only the words seen in the linguistic constraints, its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation, as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model - synonyms and antonyms, only synonyms, or only antonyms - and (2) the extent to which we retain the topology of the original distributional space (i.e., the value of the topological regularization factor λ).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., the employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3) using different types of constraints on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "Note, however, that using only synonyms or only antonyms also improves over the original distributional space.", "[Table 1: SimLex (SL) and SimVerb (SV) correlations for the distributional spaces X and the ER-specialized spaces X' = f(X), for GLOVE-CC, FASTTEXT, and SGNS-W2, in the lexically disjoint and lexical overlap settings.]", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models for different values of the topology regularization factor λ (with H fixed to 5).", "The best performance is obtained for λ = 0.3.", "Smaller λ values overly distort the original distributional space, whereas larger λ values dampen the specialization effects of the linguistic constraints.", "Language Transfer.", "Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zero-shot specialization: we test whether it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al. (2017) to induce a multilingual vector space containing the word vectors of three other languages - German, Italian, and Croatian - along with the English vectors."
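A common way to obtain such a shared space, and to the best of my understanding the idea behind the Smith et al. (2017) mapping, is an orthogonal map learned from translation pairs via SVD. The snippet below is a generic Procrustes sketch under that assumption, not the authors' exact procedure.

```python
import numpy as np

def learn_orthogonal_map(X_src, X_tgt):
    """Orthogonal W minimizing ||X_src W - X_tgt||_F (Procrustes solution).

    X_src, X_tgt: row-aligned embedding matrices of translation pairs,
    e.g., the 4,000 most frequent English words and their translations.
    """
    U, _, Vt = np.linalg.svd(X_src.T @ X_tgt)
    return U @ Vt

# Map a foreign space into the English GLOVE-CC space with W, then apply
# the English-trained ER specialization function f to the mapped vectors.
```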
"Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), the German FastText vectors trained on the German Wikipedia (Bojanowski et al., 2017), and the Croatian Skip-Gram vectors trained on the HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating the 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints to specialize the distributional spaces of the other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015).", "Results.", "The results are provided in Table 3.", "[Table 3: Spearman's ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on the respective language-specific SimLex-999 variants.]", "They indicate that the ER models can substantially improve over distributional spaces (e.g., by 13% for the German vector space) also in the language transfer setup, without seeing a single constraint in the target language.", "These transfer results hold promise for supporting vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1).", "Downstream Tasks.", "We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification.", "Lexical simplification aims to replace complex words - used less frequently and known to fewer speakers - with their simpler synonyms that fit the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with merely semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš and Štajner (2015), which makes word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "For each word in the input text, LIGHT-LS retrieves the most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging vector spaces specialized by the ER model into LIGHT-LS, we hope to generate truly synonymous candidates more frequently than with the unspecialized distributional space."
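A minimal sketch of the replacement step described above: retrieve the most similar words to a target in the (specialized) space and keep the top-ranked candidate if it looks simpler. The frequency-based simplicity proxy is a deliberate simplification of LIGHT-LS's actual ranking features, and the function name is illustrative.

```python
import numpy as np

def simplify_word(word, X, vocab, inv_vocab, freq, n_cands=10):
    """Replace `word` with a simpler, semantically similar candidate.

    freq[w] is a corpus frequency used here as a crude simplicity proxy;
    LIGHT-LS itself combines several simplicity and context-fit measures.
    inv_vocab is a list mapping row indices back to words.
    """
    x = X[vocab[word]]
    sims = X @ x / (np.linalg.norm(X, axis=1) * np.linalg.norm(x))
    cands = [inv_vocab[i] for i in np.argsort(-sims)[1 : n_cands + 1]]
    best = max(cands, key=lambda w: freq.get(w, 0))
    return best if freq.get(best, 0) > freq.get(word, 0) else word
```

With an ER-specialized space plugged in, the retrieved candidates should more often be true synonyms rather than merely related words.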
"Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al. (2014).", "For each indicated complex word, Horn et al. (2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš and Štajner, 2015) to quantify the quality and frequency of the word replacements: (1) accuracy (A) is the number of correct simplifications made (i.e., cases where the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the replacement was correct).", "We plug into LIGHT-LS both the unspecialized and the specialized variants of the three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL (Mrkšić et al., 2017).", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4.", "The ER-CNT model yields considerable gains over the unspecialized spaces on both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate truly synonymous candidate replacements more often than the unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6% of all indicated complex words and manual replacement candidates from the LS dataset are covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications, as done by the ER model, while ATTRACT-REPEL is limited to local updates of the vectors of words seen in the constraints.", "By learning a global specialization function, the proposed ER models appear more resilient to the observed drop in the coverage of test words by the linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: the original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking.", "Finally, we also evaluate the importance of explicit retrofitting in a downstream language understanding task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016).", "A DST model is typically the first component of a dialog system pipeline (Young, 2010), tasked with capturing the user's goals and updating the dialog state at each dialog turn.", "As in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors; the NBT implementation is available at https://github.com/nmrksic/neural-belief-tracker.", "The NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset, which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate the performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results.", "We show the DST performance in Table 6.", "[Table 6: DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.]", "The DST results tell a similar story to the word similarity and lexical simplification results: the ER model substantially improves over the distributional space.", "With the linguistic specialization constraints covering 57% of the words from the WOZ dataset, the ER model's performance is on a par with the ATTRACT-REPEL specialization."
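Joint goal accuracy is not spelled out in the paper; under its standard DST definition, a turn counts as correct only if the full set of predicted goal slot-value pairs matches the gold state. Assuming that definition, it could be computed as follows.

```python
def joint_goal_accuracy(pred_states, gold_states):
    """Fraction of dialog turns whose predicted goal state matches exactly.

    Each state is a dict of slot -> value for one turn, e.g.
    {"food": "indian", "price range": "cheap"}.
    """
    correct = sum(p == g for p, g in zip(pred_states, gold_states))
    return correct / len(gold_states)
```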
"This further confirms our hypothesis that the importance of learning a global specialization for the full vocabulary in downstream tasks grows as the coverage of test words by the specialization constraints drops.", "Conclusion.", "We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update the vectors of words from the external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feed-forward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https://github.com/codogogo/explirefit." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-0
Distributional hypothesis
You shall know the meaning of the word by the company it keeps Words that occur in similar contexts tend to have similar meanings
You shall know the meaning of the word by the company it keeps Words that occur in similar contexts tend to have similar meanings
[]
GEM-SciDuet-train-115#paper-1308#slide-1
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and in (2) two downstream tasks: lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015 .", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into the, e.g., an SGNS-or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016 Nguyen et al., , 2017 .", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016) , these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016) .", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a , consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network.", "This network, shown in Figure 1b learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let X = {x i } N i=1 , x i ∈ R d be the d-dimensional distributional vector space that we want to spe- cialize (with V = {w i } N i=1 referring to the associated vocabulary) and let X = {x i } N i=1 be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let C = {(w i , w j , r) l } L l=1 be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w i and w j and a semantic relation r that holds between them.", "The most recent state-of-the-art retrofitting work Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., r l ∈ {ant, syn}.", "Let g be the function measuring the distance between words w i and w j based on their vector 
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs (w i , w j , syn) should have a minimal possible distance score in the spe-cialized space, i.e., g(x i , x j ) = g min ; 1 2.", "All antonymy pairs (w i , w j , ant) should have a maximal distance in the specialized space, i.e., g(x i , x j ) = g max ; 2 3.", "The distances g(x i , x k ) in the specialized space between some word w i and all other words w k that are not synonyms or antonyms of w i should be in the interval (g min , g max ).", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (w i , w j , r) ∈ C with distances that words w i and w j from those pairs have with other vocabulary words w m .", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and nonantonymous words g(x i , x m ) in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016; , we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: g(x i , x m ) = g(x i , x m ) .", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car driver) being as similar as vectors of semantically similar words (e.g., carautomobile).", "To anticipate this, we compare the distances of pairs (w i , w j , r) ∈ C with the distances for pairs (w i , w m ) and (w j , w n ), where w m and w n are negative examples: the vocabulary words that are most similar to w i and w j , respectively, in the original distributional space X.", "Concretely, for each constraint (w i , w j , r) ∈ C we retrieve (1) K vocabulary words {w k m } K k=1 that are closest in the input distributional space (according to the distance function g) to the word w i and (2) K vocabulary words {w k n } K k=1 that are closest to the word w j .", "We then create, for each constraint (w i , w j , r) ∈ C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... x' j =f(x j ) Distributional vector space acquire  [0.11, -0.23, ...,1.11] bright  [0.11, -0.23, ..., 1.11] buy  [-0.41, 0.29, ..., -1.07] ... target  [-1.7, 0.13, ..., -0.92] top  [-0.21, -0.52, ..., 0.47] ... 
Training instances (micro-batches) x' i =f(x i ) (b) Supervised specialization model Figure 1 : (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model.", "(b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f , defined as the deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization.", "M (wi, wj, r) = {(xi, xj, gr)} ∪ {(xi, x k m , g(xi, x k m ))} K k=1 ∪ {(xj, x k n , g(xj, x k n ))} K k=1 (1) with g r = g min if r = syn; g r = g max if r = ant.", "Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters θ of the parametrized function f ( x; θ) : R d → R d (where d is the dimensionality of the input space).", "The specialized embedding x i of the word w i is then obtained as x i = f (x i ; θ).", "The specialized space X is obtained by transforming distributional vectors of all vocabulary words, X = f (X; θ).", "We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations φ.", "The illustration of this network is given in Figure 1b .", "The i-th hidden layer is defined with a weight matrix W i and a bias vector b i : h i (x; θi) = φ h i−1 (x; θi−1)W i + b i (2) where θ i is the subset of network's parameters up to the i-th layer.", "Note that in this notation, x = h 0 (x; ∅) and x = f (x, θ) = h H (x; θ) .", "Let d h be the size of the hidden layers.", "The network's parameters are then as follows: W 1 ∈ R d×d h ; W i ∈ R d h ×d h , i ∈ {2, .", ".", ".", ", H − 1}; W H ∈ R d h ×d ; b i ∈ R d h , i ∈ {1, .", ".", ".", ", H − 1}; b H ∈ R d .", "Optimization Objectives We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors x i and x j and a score g denoting the desired distance between the specialized vectors x i and x j of corresponding words w i and w j .", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of N training instances, {(x i 1 , x i 2 , g i )} N i=1 .", "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: JMSD = N i=1 g(f (x i 1 ), f (x i 2 )) − g i 2 (3) By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space X in which distances between all synonyms amount to g min , distances between all antonyms amount to g max and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs (w i , w j ) have smaller (or larger) distances than corresponding non-constraint word pairs (w i , w k ) and (w j , w k ).", "Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and 
"Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective micro-batch (cf. Eq. (1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonyms (antonyms) $(w_i, w_j)$ are larger (or smaller, for antonyms) than for pairs $(w_i, w_k)$ and $(w_j, w_k)$ involving the same words $w_i$ and $w_j$, respectively.", "Let $S$ and $A$ be the sets of micro-batches created from synonymy and antonymy constraints.", "Let $M_s = \{(\mathbf{x}^i_1, \mathbf{x}^i_2, g^i)\}_{i=1}^{2K+1}$ be one micro-batch created from one synonymy constraint and let $M_a$ be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for $i = 1$) in every micro-batch corresponds to the constraint pair and the remaining $2K$ triples (i.e., for $i \in \{2, \dots, 2K+1\}$) to the respective non-constraint word pairs.", "We then define the contrastive objective as follows: $$J_{CNT} = \sum_{M_s \in S} \sum_{i=2}^{2K+1} \left((g^i - g_{min}) - (g'^i - g'^1)\right)^2 + \sum_{M_a \in A} \sum_{i=2}^{2K+1} \left((g_{max} - g^i) - (g'^1 - g'^i)\right)^2$$ where $g'$ is a short-hand notation for the distance between vectors in the specialized space, i.e., $g'(\mathbf{x}_1, \mathbf{x}_2) = g(\mathbf{x}'_1, \mathbf{x}'_2) = g(f(\mathbf{x}_1), f(\mathbf{x}_2))$.", "Topological Regularization.", "Because the distributional space $\mathbf{X}$ already contains useful semantic information, we want our specialized space $\mathbf{X}'$ to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of $\mathbf{X}$.", "To this end, we define an additional regularization objective that measures the distance between the original vectors $\mathbf{x}_1$ and $\mathbf{x}_2$ and their specialized counterparts $\mathbf{x}'_1 = f(\mathbf{x}_1)$ and $\mathbf{x}'_2 = f(\mathbf{x}_2)$, for all examples in the training set: $$J_{REG} = \sum_{i=1}^{N} g(\mathbf{x}^i_1, f(\mathbf{x}^i_1)) + g(\mathbf{x}^i_2, f(\mathbf{x}^i_2)) \quad (4)$$", "We minimize the final objective function $J = J' + \lambda J_{REG}$, where $J'$ is either $J_{MSD}$ or $J_{CNT}$ and $\lambda$ is the regularization factor which determines how strictly we retain the topology of the original space.",
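The contrastive objective and the topological regularizer could be sketched as below, reusing cosine_distance (the PyTorch version) and the imports from the previous snippet; the batching scheme - micro-batches stacked into (B, 2K+1, d) tensors with the constraint pair first - is our assumption.

import torch
# cosine_distance as defined in the previous sketch (PyTorch version).

def cnt_loss(f, x1, x2, g, is_syn, g_min=0.0, g_max=2.0):
    # x1, x2: (B, 2K+1, d); g: (B, 2K+1); is_syn: (B,) True for synonym micro-batches.
    gp = cosine_distance(f(x1), f(x2))                   # g' in the specialized space
    syn = (g[:, 1:] - g_min) - (gp[:, 1:] - gp[:, :1])   # J_CNT term for r = syn
    ant = (g_max - g[:, 1:]) - (gp[:, :1] - gp[:, 1:])   # J_CNT term for r = ant
    return (torch.where(is_syn[:, None], syn, ant) ** 2).sum()

def reg_loss(f, x1, x2):
    # Eq. (4): keep specialized vectors close to their distributional originals.
    return (cosine_distance(x1, f(x1)) + cosine_distance(x2, f(x2))).sum()

def total_loss(f, x1, x2, g, is_syn, lam=0.3):
    return cnt_loss(f, x1, x2, g, is_syn) + lam * reg_loss(f, x1, x2)

During training, total_loss would then be minimized with the optimizer and learning rate given in the configuration below.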
"Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 - vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b), using context windows of size 2; (2) GLOVE-CC - vectors trained with the GloVe model (Pennington et al., 2014) on the Common Crawl; and (3) FASTTEXT - vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017).", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015).", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there are only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the vocabulary of the pre-trained distributional vector space.", "For example, only 15.3% of the words from the constraints are found in the whole vocabulary of the SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% of the constraint words among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods - being able to specialize only the vectors of words seen in the external constraints - and the need for our global ER method, which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function $g$ to the cosine distance: $$g(\mathbf{x}_1, \mathbf{x}_2) = 1 - \frac{\mathbf{x}_1 \cdot \mathbf{x}_2}{\lVert\mathbf{x}_1\rVert \lVert\mathbf{x}_2\rVert}$$ and use the hyperbolic tangent as activation, $\phi = \tanh$.", "For each constraint $(w_i, w_j)$, we create $K = 4$ corresponding negative examples for both $w_i$ and $w_j$, resulting in micro-batches with $2K + 1 = 9$ training instances.", "We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values - the number of hidden layers $H = 5$, their size $d_h = 1000$, and the topological regularization factor $\lambda = 0.3$ - by minimizing the model's objective $J$ on the validation set.", "We train the model in mini-batches, each containing $N_b = 100$ constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with the initial learning rate set to $10^{-4}$.", "We use the loss on the validation set as the early stopping criterion.", "Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: the SimLex-999 dataset and SimVerb-3500 (Gerz et al., 2016), a recent dataset containing human similarity ratings for 3,500 verb pairs.", "We use Spearman's ρ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report the performance of the state-of-the-art local retrofitting model ATTRACT-REPEL, which is able to specialize only the words from the linguistic constraints.", "Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1.", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform the original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from the linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective enforces the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.",
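For the intrinsic evaluation, a sketch of the Spearman correlation computation, assuming a specialize callable that maps a word to its specialized NumPy vector and reusing the NumPy cosine_distance helper from the first sketch; everything here except the use of Spearman's ρ is our naming.

from scipy.stats import spearmanr

def eval_word_similarity(specialize, pairs):
    # pairs: (word1, word2, gold_score) triples from SimLex-999 or SimVerb-3500.
    gold = [score for _, _, score in pairs]
    pred = [-cosine_distance(specialize(w1), specialize(w2))
            for w1, w2, _ in pairs]     # negate: smaller distance = more similar
    return spearmanr(gold, pred).correlation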
"Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for the FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only the words seen in the linguistic constraints, its performance crucially depends on the coverage of the test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of the SimLex words and 99.9% of the SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model - synonyms and antonyms, only synonyms, or only antonyms - and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor $\lambda$).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., the employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models ($H = 5$, $\lambda = 0.3$) using different types of constraints on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "[Table 1 residue: only the header (settings lexically disjoint and lexical overlap, each for GLOVE-CC, FASTTEXT, and SGNS-W2 on SL and SV) and the row labels Distributional ($\mathbf{X}$) and ER-Specialized ($\mathbf{X}' = f(\mathbf{X})$) are recoverable here.]", "Note, however, that using only synonyms or only antonyms also improves over the original distributional space.", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor $\lambda$ ($H$ fixed to 5).", "The best performance is obtained for $\lambda = 0.3$.", "Smaller $\lambda$ values overly distort the original distributional space, whereas larger $\lambda$ values dampen the specialization effects of the linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zero-shot specialization: we test whether it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al. (2017) to induce a multilingual vector space containing word vectors of three other languages - German, Italian, and Croatian - along with the English vectors.", "[Table 3 caption: Spearman's ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on the respective language-specific SimLex-999 variants.]", "Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian Skip-Gram vectors trained on the HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.",
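A zero-shot transfer sketch under the setup just described: foreign vectors are first mapped into the English space and the English-trained f is then applied. Learning the projection W (the Smith et al. (2017) method, fit on the translation pairs below) is outside this snippet, and we assume f has been wrapped to accept and return NumPy arrays; the names are ours.

def specialize_foreign(f, foreign_vectors, W):
    mapped = foreign_vectors @ W   # align target-language vectors with the English space
    return f(mapped)               # apply the specialization function trained on English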
"We create the translation pairs needed to learn the projections by automatically translating the 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints to specialize the distributional spaces of the other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015).", "Results.", "The results are provided in Table 3.", "They indicate that the ER models can substantially improve (e.g., by 13% for the German vector space) over the distributional spaces also in the language transfer setup, without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1).", "Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words - used less frequently and known to fewer speakers - with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš and Štajner (2015), which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "For each word in the input text, LIGHT-LS retrieves the most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al. (2014).", "For each indicated complex word, Horn et al. (2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš and Štajner, 2015) to quantify the quality and frequency of word replacements: (1) accuracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the replacement was correct).",
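The two LS metrics could be computed as sketched below; the input structures - a dict mapping each indicated complex word to the system's replacement (or None if it made none), and a dict mapping it to the set of 50 manual simplifications - are our assumption.

def ls_metrics(system_replacements, manual_replacements):
    n = len(manual_replacements)
    replaced = {w: r for w, r in system_replacements.items() if r is not None}
    correct = sum(1 for w, r in replaced.items() if r in manual_replacements[w])
    return correct / n, len(replaced) / n   # (accuracy A, change C)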
replacement was correct).", "We plug into LIGHT-LS both unspecialized and specialized variants of three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL .", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4 .", "ER-CNT model yields considerable gains over unspecialized spaces for both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6 % of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications as done by the ER model, while ATTRACT-REPEL is limited to local vector updates only of words seen in the constraints.", "By learning a global specialization function the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understand- Table 6 : DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.", "ing task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016) .", "A DST model is typically the first component of a dialog system pipeline (Young, 2010) , tasked with capturing user's goals and updating the dialog state at each dialog turn.", "Similarly as in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors .", "9 NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results .", "We show DST performance in Table 6 .", "The DST results tell a similar story like word similarity and lexical simplification results -the ER 9 https://github.com/nmrksic/neural-belief-tracker model substantially improves over the distributional space.", "With linguistic specialization constraints covering 57% of words from the WOZ dataset, ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global 
"Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update the vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feed-forward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https://github.com/codogogo/explirefit." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-1
Cars Drivers Vehicles and Wheels
Words co-occur in text due to paradigmatic relations (e.g., synonymy, hypernymy), but also due to syntagmatic relations (e.g., selectional preferences). Distributional vectors conflate all types of association. driver and car are not paradigmatically related: not synonyms, not antonyms, not hypernyms, not co-hyponyms, etc. But both words will co-occur frequently with driving, accident, wheel, vehicle, road, trip, race, etc.
Words co-occur in text due to paradigmatic relations (e.g., synonymy, hypernymy), but also due to syntagmatic relations (e.g., selectional preferences). Distributional vectors conflate all types of association. driver and car are not paradigmatically related: not synonyms, not antonyms, not hypernyms, not co-hyponyms, etc. But both words will co-occur frequently with driving, accident, wheel, vehicle, road, trip, race, etc.
[]
GEM-SciDuet-train-115#paper-1308#slide-2
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks - lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015 .", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into the, e.g., an SGNS-or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016 Nguyen et al., , 2017 .", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016) , these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016) .", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a , consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network.", "This network, shown in Figure 1b learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let X = {x i } N i=1 , x i ∈ R d be the d-dimensional distributional vector space that we want to spe- cialize (with V = {w i } N i=1 referring to the associated vocabulary) and let X = {x i } N i=1 be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let C = {(w i , w j , r) l } L l=1 be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w i and w j and a semantic relation r that holds between them.", "The most recent state-of-the-art retrofitting work Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., r l ∈ {ant, syn}.", "Let g be the function measuring the distance between words w i and w j based on their vector 
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs (w i , w j , syn) should have a minimal possible distance score in the spe-cialized space, i.e., g(x i , x j ) = g min ; 1 2.", "All antonymy pairs (w i , w j , ant) should have a maximal distance in the specialized space, i.e., g(x i , x j ) = g max ; 2 3.", "The distances g(x i , x k ) in the specialized space between some word w i and all other words w k that are not synonyms or antonyms of w i should be in the interval (g min , g max ).", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (w i , w j , r) ∈ C with distances that words w i and w j from those pairs have with other vocabulary words w m .", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and nonantonymous words g(x i , x m ) in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016; , we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: g(x i , x m ) = g(x i , x m ) .", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car driver) being as similar as vectors of semantically similar words (e.g., carautomobile).", "To anticipate this, we compare the distances of pairs (w i , w j , r) ∈ C with the distances for pairs (w i , w m ) and (w j , w n ), where w m and w n are negative examples: the vocabulary words that are most similar to w i and w j , respectively, in the original distributional space X.", "Concretely, for each constraint (w i , w j , r) ∈ C we retrieve (1) K vocabulary words {w k m } K k=1 that are closest in the input distributional space (according to the distance function g) to the word w i and (2) K vocabulary words {w k n } K k=1 that are closest to the word w j .", "We then create, for each constraint (w i , w j , r) ∈ C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... x' j =f(x j ) Distributional vector space acquire  [0.11, -0.23, ...,1.11] bright  [0.11, -0.23, ..., 1.11] buy  [-0.41, 0.29, ..., -1.07] ... target  [-1.7, 0.13, ..., -0.92] top  [-0.21, -0.52, ..., 0.47] ... 
Training instances (micro-batches) x' i =f(x i ) (b) Supervised specialization model Figure 1 : (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model.", "(b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f , defined as the deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization.", "M (wi, wj, r) = {(xi, xj, gr)} ∪ {(xi, x k m , g(xi, x k m ))} K k=1 ∪ {(xj, x k n , g(xj, x k n ))} K k=1 (1) with g r = g min if r = syn; g r = g max if r = ant.", "Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters θ of the parametrized function f ( x; θ) : R d → R d (where d is the dimensionality of the input space).", "The specialized embedding x i of the word w i is then obtained as x i = f (x i ; θ).", "The specialized space X is obtained by transforming distributional vectors of all vocabulary words, X = f (X; θ).", "We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations φ.", "The illustration of this network is given in Figure 1b .", "The i-th hidden layer is defined with a weight matrix W i and a bias vector b i : h i (x; θi) = φ h i−1 (x; θi−1)W i + b i (2) where θ i is the subset of network's parameters up to the i-th layer.", "Note that in this notation, x = h 0 (x; ∅) and x = f (x, θ) = h H (x; θ) .", "Let d h be the size of the hidden layers.", "The network's parameters are then as follows: W 1 ∈ R d×d h ; W i ∈ R d h ×d h , i ∈ {2, .", ".", ".", ", H − 1}; W H ∈ R d h ×d ; b i ∈ R d h , i ∈ {1, .", ".", ".", ", H − 1}; b H ∈ R d .", "Optimization Objectives We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors x i and x j and a score g denoting the desired distance between the specialized vectors x i and x j of corresponding words w i and w j .", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of N training instances, {(x i 1 , x i 2 , g i )} N i=1 .", "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: JMSD = N i=1 g(f (x i 1 ), f (x i 2 )) − g i 2 (3) By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space X in which distances between all synonyms amount to g min , distances between all antonyms amount to g max and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs (w i , w j ) have smaller (or larger) distances than corresponding non-constraint word pairs (w i , w k ) and (w j , w k ).", "Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and 
synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective microbatch (cf.", "Eq.", "(1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonyms (antonyms) (w i , w j ) are larger (or smaller, for antonyms) than for pairs (w i , w k ) and (w j , w k ) involving the same words w i and w j , respectively.", "Let S and A be the sets of microbatches created from synonymy and antonymy con- straints.", "Let M s = {(x i 1 , x i 2 , g i )} 2K+1 i=1 be one micro-batch created from one synonymy constraint and let M a be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for i = 1) in every microbatch corresponds to the constraint pair and the remaining 2K triples (i.e., for i ∈ {2, .", ".", ".", ", 2K + 1}) to respective non-constraint word pairs.", "We then define the contrastive objective as follows: JCNT = Ms∈S 2K+1 i=2 (g i − gmin ) − (g i − g 1 ) 2 + Ma∈A 2K+1 i=2 (gmax − g i ) − (g 1 − g i ) 2 where g is a short-hand notation for the distance between vectors in the specialized space, i.e., g (x 1 , x 2 ) = g(x 1 , x 2 ) = g(f (x 1 ), f (x 2 )).", "Topological Regularization.", "Because the distributional space X already contains useful semantic information, we want our specialized space X to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of X.", "To this end, we define an additional regularization objective that measures the distance between the original vectors x 1 and x 2 and their specialized counterparts x 1 = f (x 1 ) and x 2 = f (x 2 ), for all examples in the training set: JREG = N i=1 g(x i 1 , f (x i 1 )) + g(x i 2 , f (x i 2 )) (4) We minimize the final objective function J = J + λJ REG .", "J is either J MSD or J CNT and λ is the regularization factor which determines how strictly we retain the topology of the original space.", "Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 -vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b) , using the context windows of size 2; (2) GLOVE-CC -vectors trained with the GloVe (Pennington et al., 2014 ) model on the Common Crawl; and (3) FASTTEXT -vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017) .", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015) .", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there is only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the dictionary of the pre-trained distributional vector space.", "For example, only 15.3% of the words from constraints are found in the whole vocabulary of SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% constraint words 
among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods, being able to specialize only the vectors of words seen in the external constraints, and the need for our global ER method which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function g to cosine distance: g(x 1 , x 2 ) = 1 − (x 1 · x 2 /( x 1 x 2 )) and use the hyperbolic tangent as activation, φ = tanh.", "For each constraint (w i , w j ), we create K = 4 corresponding negative examples for both w i and w j , resulting in micro-batches with 2K + 1 = 9 training instances.", "3 We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values, the number of hidden layers H = 5 and their size d h = 1000, and the topological regularization factor λ = 0.3 by minimizing the model's objective J on the validation set.", "We train the model in mini-batches, each containing N b = 100 constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with initial learning rate set to 10 −4 .", "We use the loss on the validation set as the early stopping criteria.", "Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: SimLex-999 dataset and SimVerb-3500 (Gerz et al., 2016) , a recent dataset containing human similarity ratings for 3,500 verb pairs.", "4 We use Spearman's ρ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report performance of the state-of-the-art local retrofitting model ATTRACT-REPEL , which is able to specialize only the words from the linguistic constraints.", "Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1 .", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective enforces 
the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.", "Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only words seen in linguistic constraints, 5 its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor λ).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "Note, however, that using Setting: lexically disjoint Setting: lexical overlap GLOVE-CC FASTTEXT SGNS-W2 GLOVE-CC FASTTEXT SGNS-W2 SL SV SL SV SL SV SL SV SL SV SL SV Distributional (X) .", ".544 ER-Specialized (X = f (X)) ER- only synonyms or only antonyms also improves over the original distributional space.", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor λ (H fixed to 5).", "The best performance for is obtained for λ = 0.3.", "Smaller lambda values overly distort the original distributional space, whereas larger lambda values dampen the specialization effects of linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zeroshot specialization: we test if it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al.", "(2017) to induce a multilingual vec- Table 3 : Spearman's ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on respective language-specific SimLex-999 variants.", "tor space 6 containing word vectors of three other languages -German, Italian, and Croatian -along with the English vectors.", "7 Concretely, we map the Italian CBOW vectors (Dinu et al., 2015) , German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017) , and Croatian 
Skip-Gram vectors trained on HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015; .", "Results.", "The results are provided in Table 3 .", "They indicate that the ER models can substantially improve (e.g., by 13% for German vector space) over distributional spaces also in the language transfer setup without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1 ).", "Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words -used less frequently and known to fewer speakers -with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš andŠtajner (2015) which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "8 For each word in the input text LIGHT-LS retrieves most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging-in vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al.", "(2014) .", "For each indicated complex word Horn et al.", "(2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš andŠtajner, 2015) to quantify the quality and frequency of word replacements: (1) accurracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the 
replacement was correct).", "We plug into LIGHT-LS both unspecialized and specialized variants of three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL .", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4 .", "ER-CNT model yields considerable gains over unspecialized spaces for both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6 % of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications as done by the ER model, while ATTRACT-REPEL is limited to local vector updates only of words seen in the constraints.", "By learning a global specialization function the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understand- Table 6 : DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.", "ing task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016) .", "A DST model is typically the first component of a dialog system pipeline (Young, 2010) , tasked with capturing user's goals and updating the dialog state at each dialog turn.", "Similarly as in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors .", "9 NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results .", "We show DST performance in Table 6 .", "The DST results tell a similar story like word similarity and lexical simplification results -the ER 9 https://github.com/nmrksic/neural-belief-tracker model substantially improves over the distributional space.", "With linguistic specialization constraints covering 57% of words from the WOZ dataset, ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global 
specialization for the full vocabulary in downstream tasks grows with the drop of the test word coverage by specialization constraints.", "Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feedforward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https:// github.com/codogogo/explirefit." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-2
Vector specialization using external resources
Key idea: refine vectors using external resources Specializing vectors for semantic similarity Integrate external constraints into the learning objective Modify the pre-trained word embeddings using lexical constraints (+) Specialize the entire vocabulary (of the corpus) (−) Tailored for a specific embedding model (−) Specialize only the vectors of words found in external constraints (+) Applicable to any pre-trained embedding space (+) Much better performance than joint models (Mrksic et al., 2016)
Key idea: refine vectors using external resources Specializing vectors for semantic similarity Integrate external constraints into the learning objective Modify the pre-trained word embeddings using lexical constraints (+) Specialize the entire vocabulary (of the corpus) (−) Tailored for a specific embedding model (−) Specialize only the vectors of words found in external constraints (+) Applicable to any pre-trained embedding space (+) Much better performance than joint models (Mrksic et al., 2016)
[]
GEM-SciDuet-train-115#paper-1308#slide-3
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks -lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015).", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into, e.g., an SGNS- or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016, 2017).", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016), these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016).", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a, consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network.", "This network, shown in Figure 1b, learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let X = {x_i}_{i=1}^N, x_i ∈ R^d, be the d-dimensional distributional vector space that we want to specialize (with V = {w_i}_{i=1}^N referring to the associated vocabulary) and let X′ = {x′_i}_{i=1}^N be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let C = {(w_i, w_j, r)_l}_{l=1}^L be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w_i and w_j and a semantic relation r that holds between them.", "The most recent state-of-the-art retrofitting work (Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., r_l ∈ {ant, syn}.", "Let g be the function measuring the distance between words w_i and w_j based on their vector
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs (w i , w j , syn) should have a minimal possible distance score in the spe-cialized space, i.e., g(x i , x j ) = g min ; 1 2.", "All antonymy pairs (w i , w j , ant) should have a maximal distance in the specialized space, i.e., g(x i , x j ) = g max ; 2 3.", "The distances g(x i , x k ) in the specialized space between some word w i and all other words w k that are not synonyms or antonyms of w i should be in the interval (g min , g max ).", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (w i , w j , r) ∈ C with distances that words w i and w j from those pairs have with other vocabulary words w m .", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and nonantonymous words g(x i , x m ) in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016; , we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: g(x i , x m ) = g(x i , x m ) .", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car driver) being as similar as vectors of semantically similar words (e.g., carautomobile).", "To anticipate this, we compare the distances of pairs (w i , w j , r) ∈ C with the distances for pairs (w i , w m ) and (w j , w n ), where w m and w n are negative examples: the vocabulary words that are most similar to w i and w j , respectively, in the original distributional space X.", "Concretely, for each constraint (w i , w j , r) ∈ C we retrieve (1) K vocabulary words {w k m } K k=1 that are closest in the input distributional space (according to the distance function g) to the word w i and (2) K vocabulary words {w k n } K k=1 that are closest to the word w j .", "We then create, for each constraint (w i , w j , r) ∈ C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... x' j =f(x j ) Distributional vector space acquire  [0.11, -0.23, ...,1.11] bright  [0.11, -0.23, ..., 1.11] buy  [-0.41, 0.29, ..., -1.07] ... target  [-1.7, 0.13, ..., -0.92] top  [-0.21, -0.52, ..., 0.47] ... 
Training instances (micro-batches) x' i =f(x i ) (b) Supervised specialization model Figure 1 : (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model.", "(b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f , defined as the deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization.", "M (wi, wj, r) = {(xi, xj, gr)} ∪ {(xi, x k m , g(xi, x k m ))} K k=1 ∪ {(xj, x k n , g(xj, x k n ))} K k=1 (1) with g r = g min if r = syn; g r = g max if r = ant.", "Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters θ of the parametrized function f ( x; θ) : R d → R d (where d is the dimensionality of the input space).", "The specialized embedding x i of the word w i is then obtained as x i = f (x i ; θ).", "The specialized space X is obtained by transforming distributional vectors of all vocabulary words, X = f (X; θ).", "We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations φ.", "The illustration of this network is given in Figure 1b .", "The i-th hidden layer is defined with a weight matrix W i and a bias vector b i : h i (x; θi) = φ h i−1 (x; θi−1)W i + b i (2) where θ i is the subset of network's parameters up to the i-th layer.", "Note that in this notation, x = h 0 (x; ∅) and x = f (x, θ) = h H (x; θ) .", "Let d h be the size of the hidden layers.", "The network's parameters are then as follows: W 1 ∈ R d×d h ; W i ∈ R d h ×d h , i ∈ {2, .", ".", ".", ", H − 1}; W H ∈ R d h ×d ; b i ∈ R d h , i ∈ {1, .", ".", ".", ", H − 1}; b H ∈ R d .", "Optimization Objectives We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors x i and x j and a score g denoting the desired distance between the specialized vectors x i and x j of corresponding words w i and w j .", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of N training instances, {(x i 1 , x i 2 , g i )} N i=1 .", "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: JMSD = N i=1 g(f (x i 1 ), f (x i 2 )) − g i 2 (3) By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space X in which distances between all synonyms amount to g min , distances between all antonyms amount to g max and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs (w i , w j ) have smaller (or larger) distances than corresponding non-constraint word pairs (w i , w k ) and (w j , w k ).", "Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and 
synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective microbatch (cf.", "Eq.", "(1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonyms (antonyms) (w i , w j ) are larger (or smaller, for antonyms) than for pairs (w i , w k ) and (w j , w k ) involving the same words w i and w j , respectively.", "Let S and A be the sets of microbatches created from synonymy and antonymy con- straints.", "Let M s = {(x i 1 , x i 2 , g i )} 2K+1 i=1 be one micro-batch created from one synonymy constraint and let M a be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for i = 1) in every microbatch corresponds to the constraint pair and the remaining 2K triples (i.e., for i ∈ {2, .", ".", ".", ", 2K + 1}) to respective non-constraint word pairs.", "We then define the contrastive objective as follows: JCNT = Ms∈S 2K+1 i=2 (g i − gmin ) − (g i − g 1 ) 2 + Ma∈A 2K+1 i=2 (gmax − g i ) − (g 1 − g i ) 2 where g is a short-hand notation for the distance between vectors in the specialized space, i.e., g (x 1 , x 2 ) = g(x 1 , x 2 ) = g(f (x 1 ), f (x 2 )).", "Topological Regularization.", "Because the distributional space X already contains useful semantic information, we want our specialized space X to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of X.", "To this end, we define an additional regularization objective that measures the distance between the original vectors x 1 and x 2 and their specialized counterparts x 1 = f (x 1 ) and x 2 = f (x 2 ), for all examples in the training set: JREG = N i=1 g(x i 1 , f (x i 1 )) + g(x i 2 , f (x i 2 )) (4) We minimize the final objective function J = J + λJ REG .", "J is either J MSD or J CNT and λ is the regularization factor which determines how strictly we retain the topology of the original space.", "Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 -vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b) , using the context windows of size 2; (2) GLOVE-CC -vectors trained with the GloVe (Pennington et al., 2014 ) model on the Common Crawl; and (3) FASTTEXT -vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017) .", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015) .", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there is only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the dictionary of the pre-trained distributional vector space.", "For example, only 15.3% of the words from constraints are found in the whole vocabulary of SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% constraint words 
among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods, being able to specialize only the vectors of words seen in the external constraints, and the need for our global ER method which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function g to cosine distance: g(x_1, x_2) = 1 − (x_1 · x_2) / (||x_1|| ||x_2||), and use the hyperbolic tangent as activation, φ = tanh.", "For each constraint (w_i, w_j), we create K = 4 corresponding negative examples for both w_i and w_j, resulting in micro-batches with 2K + 1 = 9 training instances.", "We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values, the number of hidden layers H = 5 and their size d_h = 1000, and the topological regularization factor λ = 0.3 by minimizing the model's objective J′ on the validation set.", "We train the model in mini-batches, each containing N_b = 100 constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with the initial learning rate set to 10^-4.", "We use the loss on the validation set as the early stopping criterion.", "Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: the SimLex-999 dataset and SimVerb-3500 (Gerz et al., 2016), a recent dataset containing human similarity ratings for 3,500 verb pairs.", "We use Spearman's ρ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report performance of the state-of-the-art local retrofitting model ATTRACT-REPEL, which is able to specialize only the words from the linguistic constraints.", "Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1.", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective enforces
the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.", "Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only words seen in linguistic constraints, its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms, and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor λ).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "Note, however, that using only synonyms or only antonyms also improves over the original distributional space.", "[Table 1 residue: header cells for the settings lexically disjoint and lexical overlap over the spaces GLOVE-CC, FASTTEXT, and SGNS-W2, with SL/SV score columns; rows Distributional (X) and ER-Specialized (X′ = f(X)); the individual scores are not recoverable here.]", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor λ (H fixed to 5).", "The best performance is obtained for λ = 0.3.", "Smaller lambda values overly distort the original distributional space, whereas larger lambda values dampen the specialization effects of linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zero-shot specialization: we test if it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al.", "(2017) to induce a multilingual vector space containing word vectors of three other languages - German, Italian, and Croatian - along with the English vectors.", "Table 3: Spearman's ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on respective language-specific SimLex-999 variants.", "Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian
Skip-Gram vectors trained on HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating the 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015).", "Results.", "The results are provided in Table 3.", "They indicate that the ER models can substantially improve (e.g., by 13% for the German vector space) over distributional spaces also in the language transfer setup without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1).", "Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words - used less frequently and known to fewer speakers - with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš and Štajner (2015) which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "For each word in the input text LIGHT-LS retrieves the most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al.", "(2014).", "For each indicated complex word Horn et al.", "(2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš and Štajner, 2015) to quantify the quality and frequency of word replacements: (1) accuracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the
replacement was correct).", "We plug into LIGHT-LS both unspecialized and specialized variants of three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL .", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4 .", "ER-CNT model yields considerable gains over unspecialized spaces for both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6 % of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications as done by the ER model, while ATTRACT-REPEL is limited to local vector updates only of words seen in the constraints.", "By learning a global specialization function the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understand- Table 6 : DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.", "ing task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016) .", "A DST model is typically the first component of a dialog system pipeline (Young, 2010) , tasked with capturing user's goals and updating the dialog state at each dialog turn.", "Similarly as in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors .", "9 NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results .", "We show DST performance in Table 6 .", "The DST results tell a similar story like word similarity and lexical simplification results -the ER 9 https://github.com/nmrksic/neural-belief-tracker model substantially improves over the distributional space.", "With linguistic specialization constraints covering 57% of words from the WOZ dataset, ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global 
specialization function for the full vocabulary in downstream tasks grows as the coverage of test words by the specialization constraints drops.", "Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feedforward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https://github.com/codogogo/explirefit." ] }
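The micro-batch construction of Eq. (1) above reduces to a few lines of code. A minimal sketch, assuming a toy vocabulary, cosine distance, and bounds g_min = 0 and g_max = 2 (the function name `build_micro_batch` and the linear nearest-neighbour search are illustrative assumptions; the authors' actual implementation is at https://github.com/codogogo/explirefit):

```python
import numpy as np

G_MIN, G_MAX = 0.0, 2.0  # assumed bounds: cosine distance 1 - cos lies in [0, 2]

def cos_dist(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def build_micro_batch(emb, w_i, w_j, rel, k=4):
    """Eq. (1): the constraint triple plus, for each constraint word, its K
    nearest distributional neighbours kept at their original distances."""
    g_r = G_MIN if rel == "syn" else G_MAX
    batch = [(emb[w_i], emb[w_j], g_r)]
    for w in (w_i, w_j):
        negatives = [v for v in emb if v not in (w_i, w_j)]
        nearest = sorted(negatives, key=lambda v: cos_dist(emb[w], emb[v]))[:k]
        batch += [(emb[w], emb[v], cos_dist(emb[w], emb[v])) for v in nearest]
    return batch  # 2K + 1 training triples

# toy usage with random vectors
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["bright", "light", "source", "target", "top"]}
print(len(build_micro_batch(emb, "bright", "light", "syn", k=2)))  # 2*2 + 1 = 5
```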
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-3
This work
Best of both worlds Performance and flexibility of retrofitting models, while Specializing entire embedding spaces (vectors of all words) Learn an explicit retrofitting/specialization function Using external lexical constraints as training examples
Best of both worlds Performance and flexibility of retrofitting models, while Specializing entire embedding spaces (vectors of all words) Learn an explicit retrofitting/specialization function Using external lexical constraints as training examples
[]
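Zero-shot language transfer (Section 5.2 of the paper content above) only requires projecting target-language vectors into the space f was trained on before applying f. A hedged sketch, reusing the SpecializationNet sketch above; reducing the cross-lingual mapping of Smith et al. (2017) to a single matrix multiply is a simplifying assumption, and `w_map` is a hypothetical projection matrix:

```python
import torch

def specialize_foreign(f, foreign_vecs, w_map):
    """Project target-language vectors into the (English) distributional
    space f was trained on, then apply the learned global function f."""
    with torch.no_grad():
        return f(foreign_vecs @ w_map)

# hypothetical usage:
# f = SpecializationNet(d=300)
# de_vecs = torch.randn(5000, 300)   # assumed German vectors
# w_map = torch.randn(300, 300)      # assumed cross-lingual projection
# print(specialize_foreign(f, de_vecs, w_map).shape)  # torch.Size([5000, 300])
```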
GEM-SciDuet-train-115#paper-1308#slide-5
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks -lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015).", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into, e.g., an SGNS- or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016, 2017).", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016), these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016).", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a, consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network.", "This network, shown in Figure 1b, learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let X = {x_i}_{i=1}^N, x_i ∈ R^d, be the d-dimensional distributional vector space that we want to specialize (with V = {w_i}_{i=1}^N referring to the associated vocabulary) and let X′ = {x′_i}_{i=1}^N be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let C = {(w_i, w_j, r)_l}_{l=1}^L be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w_i and w_j and a semantic relation r that holds between them.", "The most recent state-of-the-art retrofitting work (Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., r_l ∈ {ant, syn}.", "Let g be the function measuring the distance between words w_i and w_j based on their vector
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs (w i , w j , syn) should have a minimal possible distance score in the spe-cialized space, i.e., g(x i , x j ) = g min ; 1 2.", "All antonymy pairs (w i , w j , ant) should have a maximal distance in the specialized space, i.e., g(x i , x j ) = g max ; 2 3.", "The distances g(x i , x k ) in the specialized space between some word w i and all other words w k that are not synonyms or antonyms of w i should be in the interval (g min , g max ).", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (w i , w j , r) ∈ C with distances that words w i and w j from those pairs have with other vocabulary words w m .", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and nonantonymous words g(x i , x m ) in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016; , we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: g(x i , x m ) = g(x i , x m ) .", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car driver) being as similar as vectors of semantically similar words (e.g., carautomobile).", "To anticipate this, we compare the distances of pairs (w i , w j , r) ∈ C with the distances for pairs (w i , w m ) and (w j , w n ), where w m and w n are negative examples: the vocabulary words that are most similar to w i and w j , respectively, in the original distributional space X.", "Concretely, for each constraint (w i , w j , r) ∈ C we retrieve (1) K vocabulary words {w k m } K k=1 that are closest in the input distributional space (according to the distance function g) to the word w i and (2) K vocabulary words {w k n } K k=1 that are closest to the word w j .", "We then create, for each constraint (w i , w j , r) ∈ C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... x' j =f(x j ) Distributional vector space acquire  [0.11, -0.23, ...,1.11] bright  [0.11, -0.23, ..., 1.11] buy  [-0.41, 0.29, ..., -1.07] ... target  [-1.7, 0.13, ..., -0.92] top  [-0.21, -0.52, ..., 0.47] ... 
Training instances (micro-batches) x' i =f(x i ) (b) Supervised specialization model Figure 1 : (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model.", "(b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f , defined as the deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization.", "M (wi, wj, r) = {(xi, xj, gr)} ∪ {(xi, x k m , g(xi, x k m ))} K k=1 ∪ {(xj, x k n , g(xj, x k n ))} K k=1 (1) with g r = g min if r = syn; g r = g max if r = ant.", "Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters θ of the parametrized function f ( x; θ) : R d → R d (where d is the dimensionality of the input space).", "The specialized embedding x i of the word w i is then obtained as x i = f (x i ; θ).", "The specialized space X is obtained by transforming distributional vectors of all vocabulary words, X = f (X; θ).", "We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations φ.", "The illustration of this network is given in Figure 1b .", "The i-th hidden layer is defined with a weight matrix W i and a bias vector b i : h i (x; θi) = φ h i−1 (x; θi−1)W i + b i (2) where θ i is the subset of network's parameters up to the i-th layer.", "Note that in this notation, x = h 0 (x; ∅) and x = f (x, θ) = h H (x; θ) .", "Let d h be the size of the hidden layers.", "The network's parameters are then as follows: W 1 ∈ R d×d h ; W i ∈ R d h ×d h , i ∈ {2, .", ".", ".", ", H − 1}; W H ∈ R d h ×d ; b i ∈ R d h , i ∈ {1, .", ".", ".", ", H − 1}; b H ∈ R d .", "Optimization Objectives We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors x i and x j and a score g denoting the desired distance between the specialized vectors x i and x j of corresponding words w i and w j .", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of N training instances, {(x i 1 , x i 2 , g i )} N i=1 .", "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: JMSD = N i=1 g(f (x i 1 ), f (x i 2 )) − g i 2 (3) By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space X in which distances between all synonyms amount to g min , distances between all antonyms amount to g max and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs (w i , w j ) have smaller (or larger) distances than corresponding non-constraint word pairs (w i , w k ) and (w j , w k ).", "Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and 
"Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective micro-batch (cf. Eq. (1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonyms (antonyms) (w_i, w_j) are larger (or smaller, for antonyms) than for the pairs (w_i, w_k) and (w_j, w_k) involving the same words w_i and w_j, respectively.", "Let S and A be the sets of micro-batches created from synonymy and antonymy constraints.", "Let M_s = {(x_1^i, x_2^i, g^i)}_{i=1}^{2K+1} be one micro-batch created from one synonymy constraint and let M_a be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for i = 1) in every micro-batch corresponds to the constraint pair and the remaining 2K triples (i.e., for i ∈ {2, ..., 2K + 1}) to the respective non-constraint word pairs.", "We then define the contrastive objective as follows: $J_{CNT} = \sum_{M_s \in S} \sum_{i=2}^{2K+1} \big((g^i - g_{min}) - (g'^i - g'^1)\big)^2 + \sum_{M_a \in A} \sum_{i=2}^{2K+1} \big((g_{max} - g^i) - (g'^1 - g'^i)\big)^2$, where g' is a short-hand notation for the distance between vectors in the specialized space, i.e., g'(x_1, x_2) = g(x'_1, x'_2) = g(f(x_1), f(x_2)).", "Topological Regularization.", "Because the distributional space X already contains useful semantic information, we want our specialized space X' to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of X.", "To this end, we define an additional regularization objective that measures the distance between the original vectors x_1 and x_2 and their specialized counterparts x'_1 = f(x_1) and x'_2 = f(x_2), for all examples in the training set: $J_{REG} = \sum_{i=1}^{N} g(x_1^i, f(x_1^i)) + g(x_2^i, f(x_2^i))$ (4).", "We minimize the final objective function J' = J + λJ_REG, where J is either J_MSD or J_CNT and λ is the regularization factor which determines how strictly we retain the topology of the original space.",
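The two remaining objectives could be sketched as follows, reusing f and cosine_distance from the sketches above; the micro-batch layout (constraint triple first, 2K negatives after) follows Eq. (1), while everything else is an illustrative assumption:

```python
def g_spec(x1, x2, params):
    # g'(x1, x2) = g(f(x1), f(x2)): distance after specialization
    return cosine_distance(f(x1, params), f(x2, params))

def j_cnt(syn_batches, ant_batches, params, g_min=0.0, g_max=2.0):
    """Contrastive objective: match specialized margins to distributional ones."""
    loss = 0.0
    for batch in syn_batches:          # first triple = constraint pair (i = 1)
        x1c, x2c, _ = batch[0]
        g1 = g_spec(x1c, x2c, params)
        for x1, x2, g in batch[1:]:    # the 2K negative pairs
            loss += ((g - g_min) - (g_spec(x1, x2, params) - g1)) ** 2
    for batch in ant_batches:
        x1c, x2c, _ = batch[0]
        g1 = g_spec(x1c, x2c, params)
        for x1, x2, g in batch[1:]:
            loss += ((g_max - g) - (g1 - g_spec(x1, x2, params))) ** 2
    return loss

def j_reg(batch, params):
    """Eq. (4): keep specialized vectors close to their distributional originals."""
    return sum(cosine_distance(x1, f(x1, params)) +
               cosine_distance(x2, f(x2, params)) for x1, x2, _ in batch)
```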
"Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 - vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b), with context windows of size 2; (2) GLOVE-CC - vectors trained with the GloVe model (Pennington et al., 2014) on the Common Crawl; and (3) FASTTEXT - vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017).", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015).", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there are only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the vocabulary of the pre-trained distributional vector space.", "For example, only 15.3% of the words from the constraints are found in the whole vocabulary of the SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% of the constraint words among the 200K most frequent words of the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods, namely that they can specialize only the vectors of words seen in the external constraints, and the need for our global ER method, which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function g to cosine distance, $g(x_1, x_2) = 1 - \frac{x_1 \cdot x_2}{\lVert x_1 \rVert \, \lVert x_2 \rVert}$, and use the hyperbolic tangent as activation, φ = tanh.", "For each constraint (w_i, w_j), we create K = 4 corresponding negative examples for both w_i and w_j, resulting in micro-batches with 2K + 1 = 9 training instances.", "We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values, i.e., the number of hidden layers H = 5, their size d_h = 1000, and the topological regularization factor λ = 0.3, by minimizing the model's objective J' on the validation set.", "We train the model in mini-batches, each containing N_b = 100 constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with the initial learning rate set to 10^-4.", "We use the loss on the validation set as the early stopping criterion.", "Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: the SimLex-999 dataset (Hill et al., 2015) and SimVerb-3500 (Gerz et al., 2016), a recent dataset containing human similarity ratings for 3,500 verb pairs.", "We use Spearman's ρ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report the performance of the state-of-the-art local retrofitting model ATTRACT-REPEL (Mrkšić et al., 2017), which is able to specialize only the words from the linguistic constraints.",
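A sketch of this intrinsic evaluation protocol, reusing f and cosine_distance from the sketches above; the tuple format for benchmark pairs is our assumption about how such data would be loaded:

```python
from scipy.stats import spearmanr

def evaluate_similarity(pairs, X, word2id, params=None):
    """Spearman's rho between gold ratings and (specialized) vector similarities.

    `pairs` is a list of (word1, word2, gold_score) tuples, e.g., read from
    SimLex-999; if `params` is given, vectors are first specialized with f.
    """
    gold, pred = [], []
    for w1, w2, score in pairs:
        if w1 not in word2id or w2 not in word2id:
            continue  # skip out-of-vocabulary pairs
        x1, x2 = X[word2id[w1]], X[word2id[w2]]
        if params is not None:
            x1, x2 = f(x1, params), f(x2, params)  # global ER: any word works
        gold.append(score)
        pred.append(1.0 - cosine_distance(x1, x2))  # cosine similarity
    return spearmanr(gold, pred).correlation
```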
"Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1.", "[Table 1: Spearman's ρ correlation on SimLex-999 (SL) and SimVerb-3500 (SV) for the distributional space (X) and the ER-specialized space (X' = f(X)), for GLOVE-CC, FASTTEXT, and SGNS-W2 embeddings, in the lexically disjoint and lexical overlap settings.]", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform the original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from the linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective forces the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.", "Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only words seen in the linguistic constraints, its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels in the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model (synonyms and antonyms, only synonyms, or only antonyms) and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor λ).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., the employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3) using different types of constraints, on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "Note, however, that using only synonyms or only antonyms also improves over the original distributional space.", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models for different values of the topological regularization factor λ (with H fixed to 5).", "The best performance is obtained for λ = 0.3.", "Smaller λ values overly distort the original distributional space, whereas larger λ values dampen the specialization effects of the linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zero-shot specialization: we test whether it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.",
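A rough sketch of this zero-shot transfer step; the projection matrix W_map is assumed given (e.g., learned from translation pairs with the mapping method of Smith et al. (2017)), and f and its parameters come from the English-trained model sketched earlier:

```python
import numpy as np

def specialize_foreign_space(X_foreign, W_map, params):
    """Project a foreign distributional space into the shared (English) space
    the ER model was trained on, then apply f row by row.

    X_foreign: (V, d) matrix of foreign word vectors; W_map: (d, d) projection.
    """
    X_shared = X_foreign @ W_map                        # map into English space
    return np.stack([f(x, params) for x in X_shared])   # specialize all words
```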
"Evaluation Setup.", "We use the mapping model of Smith et al. (2017) to induce a multilingual vector space containing word vectors of three other languages - German, Italian, and Croatian - along with the English vectors.", "Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian Skip-Gram vectors trained on the HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating the 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model, trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of the other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015).", "Results.", "The results are provided in Table 3.", "[Table 3: Spearman's ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on the respective language-specific SimLex-999 variants.]", "They indicate that the ER models can substantially improve (e.g., by 13% for the German vector space) over distributional spaces also in the language transfer setup, without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1).", "Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words - used less frequently and known to fewer speakers - with their simpler synonyms that fit the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš and Štajner (2015), which makes word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "For each word in the input text, LIGHT-LS retrieves the most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging vector spaces specialized by the ER model into LIGHT-LS, we hope to generate truly synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al. (2014).", "For each indicated complex word, Horn et al. (2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš and Štajner, 2015) to quantify the quality and frequency of word replacements: (1) accuracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the replacement was correct).",
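These two metrics are straightforward to compute; a small sketch follows (the dictionary-based input format is our assumption, not the dataset's actual layout):

```python
def light_ls_metrics(system_out, gold_subs):
    """Accuracy (A) and change (C) for lexical simplification output.

    `system_out` maps each indicated complex-word occurrence to the system's
    replacement (or None if no replacement was made); `gold_subs` maps it to
    the set of manual simplifications collected from annotators."""
    total = len(gold_subs)
    changed = sum(1 for r in system_out.values() if r is not None)
    correct = sum(1 for key, r in system_out.items()
                  if r is not None and r in gold_subs[key])
    accuracy = correct / total   # A: correct replacements / complex words
    change = changed / total     # C: share of complex words replaced at all
    return accuracy, change
```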
"We plug into LIGHT-LS both the unspecialized and the specialized variants of the three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL.", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4.", "The ER-CNT model yields considerable gains over unspecialized spaces for both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate truly synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6% of all indicated complex words and manual replacement candidates from the LS dataset are covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications, as done by the ER model, whereas ATTRACT-REPEL is limited to local updates of the vectors of words seen in the constraints.", "By learning a global specialization function, the proposed ER models seem more resilient to the observed drop in coverage of test words by the linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: the original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understanding task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016).", "[Table 6: DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.]", "A DST model is typically the first component of a dialog system pipeline (Young, 2010), tasked with capturing the user's goals and updating the dialog state at each dialog turn.", "As in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors (code: https://github.com/nmrksic/neural-belief-tracker).", "NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset, which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate the performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results.", "We show the DST performance in Table 6.", "The DST results tell a similar story to the word similarity and lexical simplification results: the ER model substantially improves over the distributional space.", "With the linguistic specialization constraints covering 57% of the words from the WOZ dataset, the ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global specialization for the full vocabulary in downstream tasks grows as the coverage of test words by specialization constraints drops.",
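In practice, using the ER model in such downstream systems reduces to specializing the full embedding table once and plugging it in as the fixed word-vector lookup; a minimal sketch, reusing f from the network sketch above:

```python
import numpy as np

def specialize_full_vocabulary(X, params):
    """Specialize the whole embedding table, X' = f(X), before handing it to a
    downstream model (e.g., as the word-vector lookup of LIGHT-LS or NBT).

    Unlike local retrofitting, every row is transformed, so words never seen
    in any linguistic constraint also receive specialized vectors."""
    return np.stack([f(x, params) for x in X])
```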
"Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update the vectors of words from the external constraints, we use the constraints as training examples for learning an explicit specialization function, implemented as a deep feed-forward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "The ER code is publicly available at: https://github.com/codogogo/explirefit." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-5
Explicit retrofitting
Constraints (synonyms and antonyms) used as training examples for learning the explicit specialization function Non-linear: Deep Feed-Forward Network (DFFN)
Constraints (synonyms and antonyms) used as training examples for learning the explicit specialization function Non-linear: Deep Feed-Forward Network (DFFN)
[]
GEM-SciDuet-train-115#paper-1308#slide-6
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks -lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015 .", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into the, e.g., an SGNS-or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016 Nguyen et al., , 2017 .", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016) , these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016) .", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a , consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network.", "This network, shown in Figure 1b learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let X = {x i } N i=1 , x i ∈ R d be the d-dimensional distributional vector space that we want to spe- cialize (with V = {w i } N i=1 referring to the associated vocabulary) and let X = {x i } N i=1 be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let C = {(w i , w j , r) l } L l=1 be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w i and w j and a semantic relation r that holds between them.", "The most recent state-of-the-art retrofitting work Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., r l ∈ {ant, syn}.", "Let g be the function measuring the distance between words w i and w j based on their vector 
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs (w i , w j , syn) should have a minimal possible distance score in the spe-cialized space, i.e., g(x i , x j ) = g min ; 1 2.", "All antonymy pairs (w i , w j , ant) should have a maximal distance in the specialized space, i.e., g(x i , x j ) = g max ; 2 3.", "The distances g(x i , x k ) in the specialized space between some word w i and all other words w k that are not synonyms or antonyms of w i should be in the interval (g min , g max ).", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (w i , w j , r) ∈ C with distances that words w i and w j from those pairs have with other vocabulary words w m .", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and nonantonymous words g(x i , x m ) in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016; , we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: g(x i , x m ) = g(x i , x m ) .", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car driver) being as similar as vectors of semantically similar words (e.g., carautomobile).", "To anticipate this, we compare the distances of pairs (w i , w j , r) ∈ C with the distances for pairs (w i , w m ) and (w j , w n ), where w m and w n are negative examples: the vocabulary words that are most similar to w i and w j , respectively, in the original distributional space X.", "Concretely, for each constraint (w i , w j , r) ∈ C we retrieve (1) K vocabulary words {w k m } K k=1 that are closest in the input distributional space (according to the distance function g) to the word w i and (2) K vocabulary words {w k n } K k=1 that are closest to the word w j .", "We then create, for each constraint (w i , w j , r) ∈ C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... x' j =f(x j ) Distributional vector space acquire  [0.11, -0.23, ...,1.11] bright  [0.11, -0.23, ..., 1.11] buy  [-0.41, 0.29, ..., -1.07] ... target  [-1.7, 0.13, ..., -0.92] top  [-0.21, -0.52, ..., 0.47] ... 
Training instances (micro-batches) x' i =f(x i ) (b) Supervised specialization model Figure 1 : (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model.", "(b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f , defined as the deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization.", "M (wi, wj, r) = {(xi, xj, gr)} ∪ {(xi, x k m , g(xi, x k m ))} K k=1 ∪ {(xj, x k n , g(xj, x k n ))} K k=1 (1) with g r = g min if r = syn; g r = g max if r = ant.", "Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters θ of the parametrized function f ( x; θ) : R d → R d (where d is the dimensionality of the input space).", "The specialized embedding x i of the word w i is then obtained as x i = f (x i ; θ).", "The specialized space X is obtained by transforming distributional vectors of all vocabulary words, X = f (X; θ).", "We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations φ.", "The illustration of this network is given in Figure 1b .", "The i-th hidden layer is defined with a weight matrix W i and a bias vector b i : h i (x; θi) = φ h i−1 (x; θi−1)W i + b i (2) where θ i is the subset of network's parameters up to the i-th layer.", "Note that in this notation, x = h 0 (x; ∅) and x = f (x, θ) = h H (x; θ) .", "Let d h be the size of the hidden layers.", "The network's parameters are then as follows: W 1 ∈ R d×d h ; W i ∈ R d h ×d h , i ∈ {2, .", ".", ".", ", H − 1}; W H ∈ R d h ×d ; b i ∈ R d h , i ∈ {1, .", ".", ".", ", H − 1}; b H ∈ R d .", "Optimization Objectives We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors x i and x j and a score g denoting the desired distance between the specialized vectors x i and x j of corresponding words w i and w j .", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of N training instances, {(x i 1 , x i 2 , g i )} N i=1 .", "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: JMSD = N i=1 g(f (x i 1 ), f (x i 2 )) − g i 2 (3) By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space X in which distances between all synonyms amount to g min , distances between all antonyms amount to g max and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs (w i , w j ) have smaller (or larger) distances than corresponding non-constraint word pairs (w i , w k ) and (w j , w k ).", "Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and 
synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective microbatch (cf.", "Eq.", "(1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonyms (antonyms) (w i , w j ) are larger (or smaller, for antonyms) than for pairs (w i , w k ) and (w j , w k ) involving the same words w i and w j , respectively.", "Let S and A be the sets of microbatches created from synonymy and antonymy con- straints.", "Let M s = {(x i 1 , x i 2 , g i )} 2K+1 i=1 be one micro-batch created from one synonymy constraint and let M a be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for i = 1) in every microbatch corresponds to the constraint pair and the remaining 2K triples (i.e., for i ∈ {2, .", ".", ".", ", 2K + 1}) to respective non-constraint word pairs.", "We then define the contrastive objective as follows: JCNT = Ms∈S 2K+1 i=2 (g i − gmin ) − (g i − g 1 ) 2 + Ma∈A 2K+1 i=2 (gmax − g i ) − (g 1 − g i ) 2 where g is a short-hand notation for the distance between vectors in the specialized space, i.e., g (x 1 , x 2 ) = g(x 1 , x 2 ) = g(f (x 1 ), f (x 2 )).", "Topological Regularization.", "Because the distributional space X already contains useful semantic information, we want our specialized space X to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of X.", "To this end, we define an additional regularization objective that measures the distance between the original vectors x 1 and x 2 and their specialized counterparts x 1 = f (x 1 ) and x 2 = f (x 2 ), for all examples in the training set: JREG = N i=1 g(x i 1 , f (x i 1 )) + g(x i 2 , f (x i 2 )) (4) We minimize the final objective function J = J + λJ REG .", "J is either J MSD or J CNT and λ is the regularization factor which determines how strictly we retain the topology of the original space.", "Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 -vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b) , using the context windows of size 2; (2) GLOVE-CC -vectors trained with the GloVe (Pennington et al., 2014 ) model on the Common Crawl; and (3) FASTTEXT -vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017) .", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015) .", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there is only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the dictionary of the pre-trained distributional vector space.", "For example, only 15.3% of the words from constraints are found in the whole vocabulary of SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% constraint words 
among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods, being able to specialize only the vectors of words seen in the external constraints, and the need for our global ER method which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function g to cosine distance: g(x 1 , x 2 ) = 1 − (x 1 · x 2 /( x 1 x 2 )) and use the hyperbolic tangent as activation, φ = tanh.", "For each constraint (w i , w j ), we create K = 4 corresponding negative examples for both w i and w j , resulting in micro-batches with 2K + 1 = 9 training instances.", "3 We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values, the number of hidden layers H = 5 and their size d h = 1000, and the topological regularization factor λ = 0.3 by minimizing the model's objective J on the validation set.", "We train the model in mini-batches, each containing N b = 100 constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with initial learning rate set to 10 −4 .", "We use the loss on the validation set as the early stopping criteria.", "Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: SimLex-999 dataset and SimVerb-3500 (Gerz et al., 2016) , a recent dataset containing human similarity ratings for 3,500 verb pairs.", "4 We use Spearman's ρ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report performance of the state-of-the-art local retrofitting model ATTRACT-REPEL , which is able to specialize only the words from the linguistic constraints.", "Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1 .", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective enforces 
the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.", "Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only words seen in linguistic constraints, 5 its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor λ).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "Note, however, that using Setting: lexically disjoint Setting: lexical overlap GLOVE-CC FASTTEXT SGNS-W2 GLOVE-CC FASTTEXT SGNS-W2 SL SV SL SV SL SV SL SV SL SV SL SV Distributional (X) .", ".544 ER-Specialized (X = f (X)) ER- only synonyms or only antonyms also improves over the original distributional space.", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor λ (H fixed to 5).", "The best performance for is obtained for λ = 0.3.", "Smaller lambda values overly distort the original distributional space, whereas larger lambda values dampen the specialization effects of linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zeroshot specialization: we test if it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al.", "(2017) to induce a multilingual vec- Table 3 : Spearman's ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on respective language-specific SimLex-999 variants.", "tor space 6 containing word vectors of three other languages -German, Italian, and Croatian -along with the English vectors.", "7 Concretely, we map the Italian CBOW vectors (Dinu et al., 2015) , German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017) , and Croatian 
Skip-Gram vectors trained on HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015; .", "Results.", "The results are provided in Table 3 .", "They indicate that the ER models can substantially improve (e.g., by 13% for German vector space) over distributional spaces also in the language transfer setup without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1 ).", "Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words -used less frequently and known to fewer speakers -with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš andŠtajner (2015) which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "8 For each word in the input text LIGHT-LS retrieves most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging-in vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al.", "(2014) .", "For each indicated complex word Horn et al.", "(2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš andŠtajner, 2015) to quantify the quality and frequency of word replacements: (1) accurracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the 
replacement was correct).", "We plug into LIGHT-LS both unspecialized and specialized variants of three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL .", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4 .", "ER-CNT model yields considerable gains over unspecialized spaces for both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6 % of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications as done by the ER model, while ATTRACT-REPEL is limited to local vector updates only of words seen in the constraints.", "By learning a global specialization function the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understand- Table 6 : DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.", "ing task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016) .", "A DST model is typically the first component of a dialog system pipeline (Young, 2010) , tasked with capturing user's goals and updating the dialog state at each dialog turn.", "Similarly as in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors .", "9 NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results .", "We show DST performance in Table 6 .", "The DST results tell a similar story like word similarity and lexical simplification results -the ER 9 https://github.com/nmrksic/neural-belief-tracker model substantially improves over the distributional space.", "With linguistic specialization constraints covering 57% of words from the WOZ dataset, ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global 
specialization for the full vocabulary in downstream tasks grows with the drop of the test word coverage by specialization constraints.", "Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feedforward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https:// github.com/codogogo/explirefit." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-6
Constraints to training instances
Specialization function: x → f(x) Distance function: g(x1, x2) (wi, wj, syn) embeddings as close as possible after specialization (wi, wj, ant) embeddings as far as possible after specialization (wi, wj) the non-constraint words stay at the same distance Micro-batches: each constraint (wi, wj, r) paired with K pairs {(wi, wm^k)}k and K pairs {(wj, wn^k)}k; wm^k (wn^k) most similar to wi (wj) in distributional space Total: 2K+1 word pairs
Specialization function: x → f(x) Distance function: g(x1, x2) (wi, wj, syn) embeddings as close as possible after specialization (wi, wj, ant) embeddings as far as possible after specialization (wi, wj) the non-constraint words stay at the same distance Micro-batches: each constraint (wi, wj, r) paired with K pairs {(wi, wm^k)}k and K pairs {(wj, wn^k)}k; wm^k (wn^k) most similar to wi (wj) in distributional space Total: 2K+1 word pairs
[]
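As a concrete illustration of the micro-batch construction summarized on the slide above, here is a minimal Python sketch. It is an illustrative reconstruction, not the authors' released code: the names (build_micro_batch, embeddings) and the target distances G_MIN/G_MAX are our own assumptions, and embedding rows are assumed L2-normalized so that cosine distance reduces to 1 minus a dot product.

```python
import numpy as np

G_MIN, G_MAX = 0.0, 2.0  # assumed target distances for synonyms/antonyms (cosine distance range)

def cosine_distances(matrix, vector):
    # Rows of `matrix` and `vector` are assumed L2-normalized.
    return 1.0 - matrix @ vector

def build_micro_batch(i, j, relation, embeddings, k=4):
    """Turn one constraint (w_i, w_j, relation) into a micro-batch of
    2K + 1 (vector pair, target distance) training instances."""
    x_i, x_j = embeddings[i], embeddings[j]
    g_r = G_MIN if relation == "syn" else G_MAX
    batch = [(x_i, x_j, g_r)]  # the constraint pair itself
    for anchor, x_a in ((i, x_i), (j, x_j)):
        dists = cosine_distances(embeddings, x_a)
        dists[anchor] = np.inf  # exclude the word itself
        for neighbor in np.argsort(dists)[:k]:
            # negative example keeps its original distributional distance
            batch.append((x_a, embeddings[neighbor], dists[neighbor]))
    return batch
```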
GEM-SciDuet-train-115#paper-1308#slide-9
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks -lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015 .", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into the, e.g., an SGNS-or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016 Nguyen et al., , 2017 .", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016) , these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016) .", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a , consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network.", "This network, shown in Figure 1b learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let X = {x i } N i=1 , x i ∈ R d be the d-dimensional distributional vector space that we want to spe- cialize (with V = {w i } N i=1 referring to the associated vocabulary) and let X = {x i } N i=1 be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let C = {(w i , w j , r) l } L l=1 be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w i and w j and a semantic relation r that holds between them.", "The most recent state-of-the-art retrofitting work Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., r l ∈ {ant, syn}.", "Let g be the function measuring the distance between words w i and w j based on their vector 
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs (w i , w j , syn) should have a minimal possible distance score in the spe-cialized space, i.e., g(x i , x j ) = g min ; 1 2.", "All antonymy pairs (w i , w j , ant) should have a maximal distance in the specialized space, i.e., g(x i , x j ) = g max ; 2 3.", "The distances g(x i , x k ) in the specialized space between some word w i and all other words w k that are not synonyms or antonyms of w i should be in the interval (g min , g max ).", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (w i , w j , r) ∈ C with distances that words w i and w j from those pairs have with other vocabulary words w m .", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and nonantonymous words g(x i , x m ) in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016; , we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: g(x i , x m ) = g(x i , x m ) .", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car driver) being as similar as vectors of semantically similar words (e.g., carautomobile).", "To anticipate this, we compare the distances of pairs (w i , w j , r) ∈ C with the distances for pairs (w i , w m ) and (w j , w n ), where w m and w n are negative examples: the vocabulary words that are most similar to w i and w j , respectively, in the original distributional space X.", "Concretely, for each constraint (w i , w j , r) ∈ C we retrieve (1) K vocabulary words {w k m } K k=1 that are closest in the input distributional space (according to the distance function g) to the word w i and (2) K vocabulary words {w k n } K k=1 that are closest to the word w j .", "We then create, for each constraint (w i , w j , r) ∈ C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... x' j =f(x j ) Distributional vector space acquire  [0.11, -0.23, ...,1.11] bright  [0.11, -0.23, ..., 1.11] buy  [-0.41, 0.29, ..., -1.07] ... target  [-1.7, 0.13, ..., -0.92] top  [-0.21, -0.52, ..., 0.47] ... 
"Figure 1: (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model.", "(b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f, defined as the deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization.", "M(w_i, w_j, r) = {(x_i, x_j, g_r)} ∪ {(x_i, x_m^k, g(x_i, x_m^k))}_{k=1}^K ∪ {(x_j, x_n^k, g(x_j, x_n^k))}_{k=1}^K (1), with g_r = g_min if r = syn and g_r = g_max if r = ant.", "Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters θ of the parametrized function f(x; θ): R^d → R^d (where d is the dimensionality of the input space).", "The specialized embedding x'_i of the word w_i is then obtained as x'_i = f(x_i; θ).", "The specialized space X' is obtained by transforming the distributional vectors of all vocabulary words, X' = f(X; θ).", "We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations φ.", "The illustration of this network is given in Figure 1b.", "The i-th hidden layer is defined with a weight matrix W_i and a bias vector b_i: h_i(x; θ_i) = φ(h_{i−1}(x; θ_{i−1}) W_i + b_i) (2), where θ_i is the subset of the network's parameters up to the i-th layer.", "Note that in this notation, x = h_0(x; ∅) and x' = f(x; θ) = h_H(x; θ).", "Let d_h be the size of the hidden layers.", "The network's parameters are then as follows: W_1 ∈ R^{d×d_h}; W_i ∈ R^{d_h×d_h}, i ∈ {2, ..., H−1}; W_H ∈ R^{d_h×d}; b_i ∈ R^{d_h}, i ∈ {1, ..., H−1}; b_H ∈ R^d.", "Optimization Objectives We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors x_i and x_j and a score g denoting the desired distance between the specialized vectors x'_i and x'_j of the corresponding words w_i and w_j.", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of N training instances, {(x_1^i, x_2^i, g^i)}_{i=1}^N.", "The simplest objective function is then the squared difference between the desired and obtained distances of specialized vectors: J_MSD = Σ_{i=1}^N (g(f(x_1^i), f(x_2^i)) − g^i)^2 (3).", "By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space X' in which distances between all synonyms amount to g_min, distances between all antonyms amount to g_max, and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs (w_i, w_j) have smaller (or larger) distances than corresponding non-constraint word pairs (w_i, w_k) and (w_j, w_k).",
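As a concrete reading of Eqs. (2) and (3), the following minimal PyTorch sketch implements the specialization function f and the ER-MSD objective. It is an illustrative reconstruction, not the released code; the layer sizes follow the configuration reported later (H = 5 layers, d_h = 1000), and the names (SpecializationFunction, msd_loss) are our own.

```python
import torch
import torch.nn as nn

class SpecializationFunction(nn.Module):
    """f(x; theta): R^d -> R^d, a fully-connected feed-forward net (Eq. 2)."""
    def __init__(self, d=300, d_hidden=1000, num_hidden=5):
        super().__init__()
        dims = [d] + [d_hidden] * (num_hidden - 1) + [d]
        self.layers = nn.ModuleList(
            [nn.Linear(dims[l], dims[l + 1]) for l in range(num_hidden)])

    def forward(self, x):
        for layer in self.layers:
            x = torch.tanh(layer(x))  # phi = tanh, as in the reported setup
        return x

def cosine_distance(a, b):
    return 1.0 - nn.functional.cosine_similarity(a, b, dim=-1)

def msd_loss(f, x1, x2, g_target):
    """ER-MSD (Eq. 3): squared gap between desired and obtained distances."""
    g_specialized = cosine_distance(f(x1), f(x2))
    return ((g_specialized - g_target) ** 2).sum()
```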
"Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective micro-batch (cf. Eq. (1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonym (antonym) pairs (w_i, w_j) are larger (or smaller, for antonyms) than for pairs (w_i, w_k) and (w_j, w_k) involving the same words w_i and w_j, respectively.", "Let S and A be the sets of micro-batches created from synonymy and antonymy constraints.", "Let M_s = {(x_1^i, x_2^i, g^i)}_{i=1}^{2K+1} be one micro-batch created from one synonymy constraint and let M_a be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for i = 1) in every micro-batch corresponds to the constraint pair and the remaining 2K triples (i.e., for i ∈ {2, ..., 2K + 1}) to the respective non-constraint word pairs.", "We then define the contrastive objective as follows: J_CNT = Σ_{M_s∈S} Σ_{i=2}^{2K+1} ((g^i − g_min) − (g'^i − g'^1))^2 + Σ_{M_a∈A} Σ_{i=2}^{2K+1} ((g_max − g^i) − (g'^1 − g'^i))^2, where g' is a short-hand notation for the distance between vectors in the specialized space, i.e., g'(x_1, x_2) = g(x'_1, x'_2) = g(f(x_1), f(x_2)).", "Topological Regularization.", "Because the distributional space X already contains useful semantic information, we want our specialized space X' to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of X.", "To this end, we define an additional regularization objective that measures the distance between the original vectors x_1 and x_2 and their specialized counterparts x'_1 = f(x_1) and x'_2 = f(x_2), for all examples in the training set: J_REG = Σ_{i=1}^N g(x_1^i, f(x_1^i)) + g(x_2^i, f(x_2^i)) (4).", "We minimize the final objective function J' = J + λJ_REG, where J is either J_MSD or J_CNT and λ is the regularization factor which determines how strictly we retain the topology of the original space.", "Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 - vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) with the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b), using context windows of size 2; (2) GLOVE-CC - vectors trained with the GloVe model (Pennington et al., 2014) on the Common Crawl; and (3) FASTTEXT - vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017).", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015).", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there are only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the vocabulary of the pre-trained distributional vector spaces.", "For example, only 15.3% of the words from the constraints are found in the whole vocabulary of SGNS-W2 embeddings.",
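The contrastive objective and the topological regularizer translate directly into code. Below is a hedged sketch, assuming each micro-batch is packed as (2K+1, d) tensors with the constraint pair in row 0, and assuming g_min = 0 and g_max = 2 for cosine distance; the names are ours, not the released API.

```python
import torch

def cosine_distance(a, b):
    return 1.0 - torch.nn.functional.cosine_similarity(a, b, dim=-1)

def cnt_loss(f, x1, x2, g_dist, is_syn, g_min=0.0, g_max=2.0):
    """ER-CNT for one micro-batch: x1, x2 are (2K+1, d) tensors with the
    constraint pair in row 0; g_dist holds the distributional distances."""
    g_spec = cosine_distance(f(x1), f(x2))  # g'(x1, x2) in the specialized space
    if is_syn:
        diffs = (g_dist[1:] - g_min) - (g_spec[1:] - g_spec[0])
    else:
        diffs = (g_max - g_dist[1:]) - (g_spec[0] - g_spec[1:])
    return (diffs ** 2).sum()

def reg_loss(f, x1, x2):
    """Topological regularization (Eq. 4): keep f(x) close to x."""
    return (cosine_distance(x1, f(x1)) + cosine_distance(x2, f(x2))).sum()

def total_loss(f, batch, lam=0.3):
    """J' = J_CNT + lambda * J_REG over a mini-batch of micro-batches."""
    loss = torch.tensor(0.0)
    for x1, x2, g_dist, is_syn in batch:  # one entry per micro-batch
        loss = loss + cnt_loss(f, x1, x2, g_dist, is_syn)
        loss = loss + lam * reg_loss(f, x1, x2)
    return loss
```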
"Similarly, we find only 13.3% and 14.6% of the constraint words among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods, which are able to specialize only the vectors of words seen in the external constraints, and the need for our global ER method, which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function g to cosine distance: g(x_1, x_2) = 1 − (x_1 · x_2)/(‖x_1‖ ‖x_2‖), and use the hyperbolic tangent as activation, φ = tanh.", "For each constraint (w_i, w_j), we create K = 4 corresponding negative examples for both w_i and w_j, resulting in micro-batches with 2K + 1 = 9 training instances.", "We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values, the number of hidden layers H = 5 and their size d_h = 1000, and the topological regularization factor λ = 0.3 by minimizing the model's objective J' on the validation set.", "We train the model in mini-batches, each containing N_b = 100 constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with the initial learning rate set to 10^−4.", "We use the loss on the validation set as the early stopping criterion.", "Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: the SimLex-999 dataset and SimVerb-3500 (Gerz et al., 2016), a recent dataset containing human similarity ratings for 3,500 verb pairs.", "We use Spearman's ρ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report the performance of the state-of-the-art local retrofitting model ATTRACT-REPEL, which is able to specialize only the words from the linguistic constraints.", "Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1.", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform the original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from the linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective enforces the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.",
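Wiring the reported configuration together, a training loop might look as follows. This sketch reuses total_loss from the previous snippet; the optimizer, learning rate, and batch size follow the paper (Adam, 10^-4, 100 constraints per mini-batch, early stopping on validation loss), while the patience value and function names are our own assumptions.

```python
import torch

def train(f, train_batches, val_batches, lam=0.3, lr=1e-4, max_epochs=100, patience=3):
    """Illustrative ER training loop: Adam with lr 1e-4, mini-batches of
    100 constraints (900 instances), early stopping on validation loss."""
    optimizer = torch.optim.Adam(f.parameters(), lr=lr)
    best_val, epochs_without_gain = float("inf"), 0
    for epoch in range(max_epochs):
        for batch in train_batches:           # each batch: 100 micro-batches
            optimizer.zero_grad()
            loss = total_loss(f, batch, lam)  # J' = J_CNT + lambda * J_REG
            loss.backward()
            optimizer.step()
        with torch.no_grad():
            val = sum(total_loss(f, b, lam).item() for b in val_batches)
        if val < best_val:
            best_val, epochs_without_gain = val, 0
        else:
            epochs_without_gain += 1
            if epochs_without_gain >= patience:
                break                         # early stopping
    return f
```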
"Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only the words seen in the linguistic constraints, its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms, and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor λ).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., the employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "[Table 1 here: Spearman's ρ on SimLex (SL) and SimVerb (SV) in the lexically disjoint and lexical overlap settings, for the distributional (X) and ER-specialized (X' = f(X)) variants of GLOVE-CC, FASTTEXT, and SGNS-W2; only the header and fragments of the table survived extraction.]", "Note, however, that using only synonyms or only antonyms also improves over the original distributional space.", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor λ (H fixed to 5).", "The best performance is obtained for λ = 0.3.", "Smaller lambda values overly distort the original distributional space, whereas larger lambda values dampen the specialization effects of the linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zero-shot specialization: we test if it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al. (2017) to induce a multilingual vector space containing word vectors of three other languages - German, Italian, and Croatian - along with the English vectors.", "Table 3: Spearman's ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on the respective language-specific SimLex-999 variants.",
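Since the transfer setup amounts to applying the English-trained function to vectors already mapped into the English space, the zero-shot step reduces to a few lines. The following is a sketch under the assumption that a trained model f and a (V, d) matrix of mapped vectors are available; the function name is ours.

```python
import torch

def specialize_space(f, mapped_vectors):
    """Zero-shot transfer sketch: `mapped_vectors` is a (V, d) matrix of,
    e.g., German vectors already projected into the English GLOVE-CC space;
    the English-trained specialization function is applied unchanged."""
    f.eval()
    with torch.no_grad():
        return f(torch.as_tensor(mapped_vectors, dtype=torch.float32)).numpy()
```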
"Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian Skip-Gram vectors trained on the HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating the 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model, trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of the other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015).", "Results.", "The results are provided in Table 3.", "They indicate that the ER models can substantially improve (e.g., by 13% for the German vector space) over distributional spaces also in the language transfer setup, without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1).", "Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words - used less frequently and known to fewer speakers - with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš and Štajner (2015), which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "For each word in the input text, LIGHT-LS retrieves the most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al. (2014).", "For each indicated complex word, Horn et al. (2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš and Štajner, 2015) to quantify the quality and frequency of word replacements: (1) accuracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the 
replacement was correct).", "We plug into LIGHT-LS both unspecialized and specialized variants of three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL.", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4.", "The ER-CNT model yields considerable gains over unspecialized spaces for both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6% of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications as done by the ER model, while ATTRACT-REPEL is limited to local vector updates only of words seen in the constraints.", "By learning a global specialization function, the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: the original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understanding task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016).", "Table 6: DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.", "A DST model is typically the first component of a dialog system pipeline (Young, 2010), tasked with capturing the user's goals and updating the dialog state at each dialog turn.", "As in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors (https://github.com/nmrksic/neural-belief-tracker).", "NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset, which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate the performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results.", "We show DST performance in Table 6.", "The DST results tell a similar story to the word similarity and lexical simplification results - the ER model substantially improves over the distributional space.", "With linguistic specialization constraints covering 57% of the words from the WOZ dataset, the ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global 
specialization for the full vocabulary in downstream tasks grows as the coverage of test words by the specialization constraints drops.", "Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feedforward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https://github.com/codogogo/explirefit." ] }
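The candidate-generation step of a LIGHT-LS-style simplifier over a specialized space, described in the lexical simplification experiments above, can be sketched as a nearest-neighbour query. This is our own illustrative reconstruction, not the LIGHT-LS code, assuming L2-normalized rows and a word-to-index vocab dict.

```python
import numpy as np

def replacement_candidates(word, vocab, specialized, top_n=10):
    """Retrieve the most similar words to `word` in a specialized space,
    i.e., the candidate-generation step of a LIGHT-LS-style simplifier.
    `specialized` is a (V, d) matrix with L2-normalized rows."""
    idx = vocab[word]                       # word -> row index
    sims = specialized @ specialized[idx]   # cosine similarity via dot product
    sims[idx] = -np.inf                     # skip the word itself
    best = np.argsort(-sims)[:top_n]
    inv = {i: w for w, i in vocab.items()}
    return [inv[i] for i in best]
```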
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-9
Model Configuration
Distance function g: cosine distance DFFN activation function: hyperbolic tangent Constraints from previous work (Zhang et al., 14; Ono et al., 15) But only 57K unique words in these constraints! 10% of micro-batches used for model validation K = 4 (micro-batch size = 9), batches of 100 micro-batches ADAM optimization (Kingma & Ba, 2015)
Distance function g: cosine distance DFFN activation function: hyperbolic tangent Constraints from previous work (Zhang et al., 14; Ono et al., 15) But only 57K unique words in these constraints! 10% of micro-batches used for model validation K = 4 (micro-batch size = 9), batches of 100 micro-batches ADAM optimization (Kingma & Ba, 2015)
[]
GEM-SciDuet-train-115#paper-1308#slide-10
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks -lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015 .", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into the, e.g., an SGNS-or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016 Nguyen et al., , 2017 .", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016) , these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016) .", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a , consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network.", "This network, shown in Figure 1b learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let X = {x i } N i=1 , x i ∈ R d be the d-dimensional distributional vector space that we want to spe- cialize (with V = {w i } N i=1 referring to the associated vocabulary) and let X = {x i } N i=1 be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let C = {(w i , w j , r) l } L l=1 be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w i and w j and a semantic relation r that holds between them.", "The most recent state-of-the-art retrofitting work Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., r l ∈ {ant, syn}.", "Let g be the function measuring the distance between words w i and w j based on their vector 
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs (w i , w j , syn) should have a minimal possible distance score in the spe-cialized space, i.e., g(x i , x j ) = g min ; 1 2.", "All antonymy pairs (w i , w j , ant) should have a maximal distance in the specialized space, i.e., g(x i , x j ) = g max ; 2 3.", "The distances g(x i , x k ) in the specialized space between some word w i and all other words w k that are not synonyms or antonyms of w i should be in the interval (g min , g max ).", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (w i , w j , r) ∈ C with distances that words w i and w j from those pairs have with other vocabulary words w m .", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and nonantonymous words g(x i , x m ) in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016; , we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: g(x i , x m ) = g(x i , x m ) .", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car driver) being as similar as vectors of semantically similar words (e.g., carautomobile).", "To anticipate this, we compare the distances of pairs (w i , w j , r) ∈ C with the distances for pairs (w i , w m ) and (w j , w n ), where w m and w n are negative examples: the vocabulary words that are most similar to w i and w j , respectively, in the original distributional space X.", "Concretely, for each constraint (w i , w j , r) ∈ C we retrieve (1) K vocabulary words {w k m } K k=1 that are closest in the input distributional space (according to the distance function g) to the word w i and (2) K vocabulary words {w k n } K k=1 that are closest to the word w j .", "We then create, for each constraint (w i , w j , r) ∈ C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... x' j =f(x j ) Distributional vector space acquire  [0.11, -0.23, ...,1.11] bright  [0.11, -0.23, ..., 1.11] buy  [-0.41, 0.29, ..., -1.07] ... target  [-1.7, 0.13, ..., -0.92] top  [-0.21, -0.52, ..., 0.47] ... 
"Figure 1: (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model.", "(b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f, defined as the deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization.", "M(w_i, w_j, r) = {(x_i, x_j, g_r)} ∪ {(x_i, x_m^k, g(x_i, x_m^k))}_{k=1}^K ∪ {(x_j, x_n^k, g(x_j, x_n^k))}_{k=1}^K (1), with g_r = g_min if r = syn and g_r = g_max if r = ant.", "Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters θ of the parametrized function f(x; θ): R^d → R^d (where d is the dimensionality of the input space).", "The specialized embedding x'_i of the word w_i is then obtained as x'_i = f(x_i; θ).", "The specialized space X' is obtained by transforming the distributional vectors of all vocabulary words, X' = f(X; θ).", "We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations φ.", "The illustration of this network is given in Figure 1b.", "The i-th hidden layer is defined with a weight matrix W_i and a bias vector b_i: h_i(x; θ_i) = φ(h_{i−1}(x; θ_{i−1}) W_i + b_i) (2), where θ_i is the subset of the network's parameters up to the i-th layer.", "Note that in this notation, x = h_0(x; ∅) and x' = f(x; θ) = h_H(x; θ).", "Let d_h be the size of the hidden layers.", "The network's parameters are then as follows: W_1 ∈ R^{d×d_h}; W_i ∈ R^{d_h×d_h}, i ∈ {2, ..., H−1}; W_H ∈ R^{d_h×d}; b_i ∈ R^{d_h}, i ∈ {1, ..., H−1}; b_H ∈ R^d.", "Optimization Objectives We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors x_i and x_j and a score g denoting the desired distance between the specialized vectors x'_i and x'_j of the corresponding words w_i and w_j.", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of N training instances, {(x_1^i, x_2^i, g^i)}_{i=1}^N.", "The simplest objective function is then the squared difference between the desired and obtained distances of specialized vectors: J_MSD = Σ_{i=1}^N (g(f(x_1^i), f(x_2^i)) − g^i)^2 (3).", "By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space X' in which distances between all synonyms amount to g_min, distances between all antonyms amount to g_max, and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs (w_i, w_j) have smaller (or larger) distances than corresponding non-constraint word pairs (w_i, w_k) and (w_j, w_k).",
"Optimization Objectives We feed the micro-batches consisting of $2K + 1$ training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors $x_i$ and $x_j$ and a score $g$ denoting the desired distance between the specialized vectors $x'_i$ and $x'_j$ of the corresponding words $w_i$ and $w_j$.", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of $N$ training instances, $\{(x_1^i, x_2^i, g^i)\}_{i=1}^N$.", "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: $J_{MSD} = \sum_{i=1}^N (g(f(x_1^i), f(x_2^i)) - g^i)^2$ (3).", "By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space $X'$ in which distances between all synonyms amount to $g_{min}$, distances between all antonyms amount to $g_{max}$, and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs $(w_i, w_j)$ have smaller (or larger) distances than corresponding non-constraint word pairs $(w_i, w_k)$ and $(w_j, w_k)$.", "Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective micro-batch (cf. Eq. (1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonyms (antonyms) $(w_i, w_j)$ are larger (or smaller, for antonyms) than for pairs $(w_i, w_k)$ and $(w_j, w_k)$ involving the same words $w_i$ and $w_j$, respectively.", "Let $S$ and $A$ be the sets of micro-batches created from synonymy and antonymy constraints.", "Let $M_s = \{(x_1^i, x_2^i, g^i)\}_{i=1}^{2K+1}$ be one micro-batch created from one synonymy constraint and let $M_a$ be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for $i = 1$) in every micro-batch corresponds to the constraint pair and the remaining $2K$ triples (i.e., for $i \in \{2, \dots, 2K+1\}$) to the respective non-constraint word pairs.", "We then define the contrastive objective as follows: $J_{CNT} = \sum_{M_s \in S} \sum_{i=2}^{2K+1} ((g^i - g_{min}) - (g'^i - g'^1))^2 + \sum_{M_a \in A} \sum_{i=2}^{2K+1} ((g_{max} - g^i) - (g'^1 - g'^i))^2$, where $g'$ is a short-hand notation for the distance between vectors in the specialized space, i.e., $g'(x_1, x_2) = g(x'_1, x'_2) = g(f(x_1), f(x_2))$.", "Topological Regularization.", "Because the distributional space $X$ already contains useful semantic information, we want our specialized space $X'$ to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of $X$.", "To this end, we define an additional regularization objective that measures the distance between the original vectors $x_1$ and $x_2$ and their specialized counterparts $x'_1 = f(x_1)$ and $x'_2 = f(x_2)$, for all examples in the training set: $J_{REG} = \sum_{i=1}^N g(x_1^i, f(x_1^i)) + g(x_2^i, f(x_2^i))$ (4).", "We minimize the final objective function $J' = J + \lambda J_{REG}$.", "$J$ is either $J_{MSD}$ or $J_{CNT}$ and $\lambda$ is the regularization factor which determines how strictly we retain the topology of the original space."
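The three objectives can be written compactly over precomputed distances; the NumPy sketch below is ours, with index 0 of each micro-batch holding the constraint pair and $g_{min} = 0$, $g_{max} = 2$ assumed for cosine distance.

```python
# Sketch (ours) of the three losses over precomputed distances. For one
# micro-batch, g_orig[i] / g_spec[i] are distances of the i-th pair in the
# original / specialized space; index 0 is the constraint pair.
import numpy as np

def j_msd(g_spec, g_target):
    """Eq. (3): squared gap between obtained and desired distances."""
    return np.sum((np.asarray(g_spec) - np.asarray(g_target)) ** 2)

def j_cnt_micro_batch(g_orig, g_spec, relation, g_min=0.0, g_max=2.0):
    """Contrastive term for one micro-batch (synonym or antonym constraint)."""
    if relation == "syn":
        target_margin = g_orig[1:] - g_min    # (g^i - g_min)
        spec_margin = g_spec[1:] - g_spec[0]  # (g'^i - g'^1)
    else:  # "ant"
        target_margin = g_max - g_orig[1:]    # (g_max - g^i)
        spec_margin = g_spec[0] - g_spec[1:]  # (g'^1 - g'^i)
    return np.sum((target_margin - spec_margin) ** 2)

def j_reg(x_orig, x_spec):
    """Eq. (4): cosine distance between each vector and its specialized image."""
    cos = np.sum(x_orig * x_spec, axis=1) / (
        np.linalg.norm(x_orig, axis=1) * np.linalg.norm(x_spec, axis=1))
    return np.sum(1.0 - cos)

# Final objective: J' = J + lambda * J_REG, with J either the MSD or CNT loss.
```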
"Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2, vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b), using context windows of size 2; (2) GLOVE-CC, vectors trained with the GloVe (Pennington et al., 2014) model on the Common Crawl; and (3) FASTTEXT, vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017).", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015).", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there are only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the dictionary of the pre-trained distributional vector space.", "For example, only 15.3% of the words from constraints are found in the whole vocabulary of SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% constraint words among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods, being able to specialize only the vectors of words seen in the external constraints, and the need for our global ER method which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function $g$ to cosine distance: $g(x_1, x_2) = 1 - \frac{x_1 \cdot x_2}{\|x_1\| \|x_2\|}$, and use the hyperbolic tangent as activation, $\phi = \tanh$.", "For each constraint $(w_i, w_j)$, we create $K = 4$ corresponding negative examples for both $w_i$ and $w_j$, resulting in micro-batches with $2K + 1 = 9$ training instances.", "We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values, the number of hidden layers $H = 5$ and their size $d_h = 1000$, and the topological regularization factor $\lambda = 0.3$, by minimizing the model's objective $J'$ on the validation set.", "We train the model in mini-batches, each containing $N_b = 100$ constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with the initial learning rate set to $10^{-4}$.", "We use the loss on the validation set as the early stopping criterion."
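Putting this configuration together, one possible ER-MSD training step with topological regularization might look as follows (PyTorch, our sketch, not the authors' released training code); the contrastive variant would swap in the micro-batch loss from the previous sketch, and `model` stands for a specialization network such as the one sketched in Section 3.2.

```python
# Our sketch of one training step with the configuration above
# (Adam, lr = 1e-4, lambda = 0.3, cosine distance).
import torch
import torch.nn.functional as F

def cosine_distance(a, b):
    return 1.0 - F.cosine_similarity(a, b, dim=-1)

def train_step(model, optimizer, x1, x2, g_target, lam=0.3):
    """x1, x2: (batch, d) distributional vectors; g_target: desired distances."""
    x1_spec, x2_spec = model(x1), model(x2)
    j = torch.sum((cosine_distance(x1_spec, x2_spec) - g_target) ** 2)  # ER-MSD
    j_reg = torch.sum(cosine_distance(x1, x1_spec)) + torch.sum(
        cosine_distance(x2, x2_spec))                                   # Eq. (4)
    loss = j + lam * j_reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Early stopping: track the validation loss and keep the best checkpoint.
```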
"Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: the SimLex-999 dataset and SimVerb-3500 (Gerz et al., 2016), a recent dataset containing human similarity ratings for 3,500 verb pairs.", "We use Spearman's $\rho$ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report the performance of the state-of-the-art local retrofitting model ATTRACT-REPEL, which is able to specialize only the words from the linguistic constraints."
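A minimal sketch of this intrinsic evaluation follows (ours; it assumes SciPy, an embedding matrix `emb`, and a `word_to_idx` lookup, none of which come from the paper's code).

```python
# Our sketch of the intrinsic evaluation: Spearman's rho between gold ratings
# and cosine similarities in a given space; out-of-vocabulary pairs are skipped.
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(emb, word_to_idx, pairs):
    """pairs: iterable of (word1, word2, gold_score) from SimLex/SimVerb."""
    gold, pred = [], []
    for w1, w2, score in pairs:
        if w1 in word_to_idx and w2 in word_to_idx:
            v1, v2 = emb[word_to_idx[w1]], emb[word_to_idx[w2]]
            pred.append(float(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
            gold.append(score)
    return spearmanr(gold, pred).correlation
```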
"Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1.", "[Table 1 here: Spearman's $\rho$ on SimLex-999 (SL) and SimVerb-3500 (SV) for GLOVE-CC, FASTTEXT, and SGNS-W2 in the lexically disjoint and lexical overlap settings, comparing the distributional space ($X$) with the ER-specialized spaces ($X' = f(X)$).]", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform the original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective forces the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.", "Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only words seen in linguistic constraints, its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms, and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor $\lambda$).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., the employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models ($H = 5$, $\lambda = 0.3$), using different types of constraints on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "Note, however, that using only synonyms or only antonyms also improves over the original distributional space.", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor $\lambda$ ($H$ fixed to 5).", "The best performance is obtained for $\lambda = 0.3$.", "Smaller lambda values overly distort the original distributional space, whereas larger lambda values dampen the specialization effects of linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zero-shot specialization: we test if it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al. (2017) to induce a multilingual vector space containing word vectors of three other languages, German, Italian, and Croatian, along with the English vectors.", "Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian Skip-Gram vectors trained on the HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating the 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of the other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015).", "Results.", "The results are provided in Table 3.", "[Table 3 here: Spearman's $\rho$ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on the respective language-specific SimLex-999 variants.]", "They indicate that the ER models can substantially improve (e.g., by 13% for the German vector space) over distributional spaces also in the language transfer setup, without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1)."
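As an illustration of this transfer setup, the sketch below learns an orthogonal (Procrustes-style) projection from translation pairs and then applies the English-trained specialization function; all names are ours, and the exact mapping procedure of Smith et al. (2017) may differ in its details.

```python
# Our sketch of the zero-shot transfer: map a target-language space into the
# English space with an orthogonal projection learned from translation pairs,
# then apply the English-trained specialization function f.
import numpy as np

def fit_orthogonal_map(src, tgt):
    """src, tgt: aligned (n_pairs, d) matrices of translation-pair vectors."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt  # W minimizing ||src @ W - tgt|| over orthogonal W

# Hypothetical usage (variable names assumed, not from the paper):
# W = fit_orthogonal_map(de_vectors_of_pairs, en_vectors_of_pairs)
# de_specialized = f(de_embeddings @ W)  # f: trained specialization network
```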
"Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words, used less frequently and known to fewer speakers, with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš and Štajner (2015), which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "For each word in the input text LIGHT-LS retrieves the most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al. (2014).", "For each indicated complex word Horn et al. (2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš and Štajner, 2015) to quantify the quality and frequency of word replacements: (1) accuracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the replacement was correct).", "We plug into LIGHT-LS both unspecialized and specialized variants of the three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL.", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4.", "The ER-CNT model yields considerable gains over unspecialized spaces for both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6% of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications as done by the ER model, while ATTRACT-REPEL is limited to local vector updates only of words seen in the constraints.", "By learning a global specialization function the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: the original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understanding task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016).", "[Table 6 here: DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.]", "A DST model is typically the first component of a dialog system pipeline (Young, 2010), tasked with capturing the user's goals and updating the dialog state at each dialog turn.", "As in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT; https://github.com/nmrksic/neural-belief-tracker), a DST model that makes inferences purely based on pre-trained word vectors.", "NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate the performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results.", "We show DST performance in Table 6.", "The DST results tell a story similar to the word similarity and lexical simplification results: the ER model substantially improves over the distributional space.", "With linguistic specialization constraints covering 57% of the words from the WOZ dataset, the ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global specialization for the full vocabulary in downstream tasks grows as the coverage of test words by the specialization constraints drops."
"Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feed-forward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https://github.com/codogogo/explirefit." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-10
Intrinsic Evaluation
Important aspect: percentage of test words covered by constraints Comparison with Attract-Repel (Mrksic et al., 2017) SimLex, lexically disjoint SimLex, lexical overlap (99%) GloVe-CC fastText SGNS-W2 GloVe-CC fastText SGNS-W2 Distributional Attract-Repel Explicit retrofitting Distributional Attract-Repel Explicit retrofitting Synonymy and antonymy constraints contain of SL and SV words Performance is an optimistic estimate or true performance Realistic setting: downstream tasks Coverage of test set words by constraints between and
Important aspect: percentage of test words covered by constraints Comparison with Attract-Repel (Mrksic et al., 2017) SimLex, lexically disjoint SimLex, lexical overlap (99%) GloVe-CC fastText SGNS-W2 GloVe-CC fastText SGNS-W2 Distributional Attract-Repel Explicit retrofitting Distributional Attract-Repel Explicit retrofitting Synonymy and antonymy constraints contain of SL and SV words Performance is an optimistic estimate or true performance Realistic setting: downstream tasks Coverage of test set words by constraints between and
[]
GEM-SciDuet-train-115#paper-1308#slide-11
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks -lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015).", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into, e.g., an SGNS- or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016, 2017).", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016), these models are also tied to the distributional objective, and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pre-trained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016).", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a, consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feed-forward neural network.", "This network, shown in Figure 1b, learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let $X = \{x_i\}_{i=1}^N$, $x_i \in \mathbb{R}^d$, be the $d$-dimensional distributional vector space that we want to specialize (with $V = \{w_i\}_{i=1}^N$ referring to the associated vocabulary) and let $X' = \{x'_i\}_{i=1}^N$ be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let $C = \{(w_i, w_j, r)_l\}_{l=1}^L$ be the set of $L$ linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words $w_i$ and $w_j$ and a semantic relation $r$ that holds between them.", "The most recent state-of-the-art retrofitting work (Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., $r_l \in \{ant, syn\}$.", "Let $g$ be the function measuring the distance between words $w_i$ and $w_j$ based on their vector
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs (w i , w j , syn) should have a minimal possible distance score in the spe-cialized space, i.e., g(x i , x j ) = g min ; 1 2.", "All antonymy pairs (w i , w j , ant) should have a maximal distance in the specialized space, i.e., g(x i , x j ) = g max ; 2 3.", "The distances g(x i , x k ) in the specialized space between some word w i and all other words w k that are not synonyms or antonyms of w i should be in the interval (g min , g max ).", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (w i , w j , r) ∈ C with distances that words w i and w j from those pairs have with other vocabulary words w m .", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and nonantonymous words g(x i , x m ) in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016; , we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: g(x i , x m ) = g(x i , x m ) .", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car driver) being as similar as vectors of semantically similar words (e.g., carautomobile).", "To anticipate this, we compare the distances of pairs (w i , w j , r) ∈ C with the distances for pairs (w i , w m ) and (w j , w n ), where w m and w n are negative examples: the vocabulary words that are most similar to w i and w j , respectively, in the original distributional space X.", "Concretely, for each constraint (w i , w j , r) ∈ C we retrieve (1) K vocabulary words {w k m } K k=1 that are closest in the input distributional space (according to the distance function g) to the word w i and (2) K vocabulary words {w k n } K k=1 that are closest to the word w j .", "We then create, for each constraint (w i , w j , r) ∈ C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... x' j =f(x j ) Distributional vector space acquire  [0.11, -0.23, ...,1.11] bright  [0.11, -0.23, ..., 1.11] buy  [-0.41, 0.29, ..., -1.07] ... target  [-1.7, 0.13, ..., -0.92] top  [-0.21, -0.52, ..., 0.47] ... 
"Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters $\theta$ of the parametrized function $f(x; \theta): \mathbb{R}^d \rightarrow \mathbb{R}^d$ (where $d$ is the dimensionality of the input space).", "The specialized embedding $x'_i$ of the word $w_i$ is then obtained as $x'_i = f(x_i; \theta)$.", "The specialized space $X'$ is obtained by transforming the distributional vectors of all vocabulary words, $X' = f(X; \theta)$.", "We define the specialization function $f$ to be a multi-layer fully-connected feed-forward network with $H$ hidden layers and non-linear activations $\phi$.", "The illustration of this network is given in Figure 1b.", "The $i$-th hidden layer is defined with a weight matrix $W^i$ and a bias vector $b^i$: $h^i(x; \theta_i) = \phi(h^{i-1}(x; \theta_{i-1}) W^i + b^i)$ (2), where $\theta_i$ is the subset of the network's parameters up to the $i$-th layer.", "Note that in this notation, $x = h^0(x; \emptyset)$ and $x' = f(x; \theta) = h^H(x; \theta)$.", "Let $d_h$ be the size of the hidden layers.", "The network's parameters are then as follows: $W^1 \in \mathbb{R}^{d \times d_h}$; $W^i \in \mathbb{R}^{d_h \times d_h}$, $i \in \{2, \dots, H-1\}$; $W^H \in \mathbb{R}^{d_h \times d}$; $b^i \in \mathbb{R}^{d_h}$, $i \in \{1, \dots, H-1\}$; $b^H \in \mathbb{R}^d$."
"Optimization Objectives We feed the micro-batches consisting of $2K + 1$ training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors $x_i$ and $x_j$ and a score $g$ denoting the desired distance between the specialized vectors $x'_i$ and $x'_j$ of the corresponding words $w_i$ and $w_j$.", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of $N$ training instances, $\{(x_1^i, x_2^i, g^i)\}_{i=1}^N$.", "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: $J_{MSD} = \sum_{i=1}^N (g(f(x_1^i), f(x_2^i)) - g^i)^2$ (3).", "By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space $X'$ in which distances between all synonyms amount to $g_{min}$, distances between all antonyms amount to $g_{max}$, and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs $(w_i, w_j)$ have smaller (or larger) distances than corresponding non-constraint word pairs $(w_i, w_k)$ and $(w_j, w_k)$.", "Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective micro-batch (cf. Eq. (1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonyms (antonyms) $(w_i, w_j)$ are larger (or smaller, for antonyms) than for pairs $(w_i, w_k)$ and $(w_j, w_k)$ involving the same words $w_i$ and $w_j$, respectively.", "Let $S$ and $A$ be the sets of micro-batches created from synonymy and antonymy constraints.", "Let $M_s = \{(x_1^i, x_2^i, g^i)\}_{i=1}^{2K+1}$ be one micro-batch created from one synonymy constraint and let $M_a$ be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for $i = 1$) in every micro-batch corresponds to the constraint pair and the remaining $2K$ triples (i.e., for $i \in \{2, \dots, 2K+1\}$) to the respective non-constraint word pairs.", "We then define the contrastive objective as follows: $J_{CNT} = \sum_{M_s \in S} \sum_{i=2}^{2K+1} ((g^i - g_{min}) - (g'^i - g'^1))^2 + \sum_{M_a \in A} \sum_{i=2}^{2K+1} ((g_{max} - g^i) - (g'^1 - g'^i))^2$, where $g'$ is a short-hand notation for the distance between vectors in the specialized space, i.e., $g'(x_1, x_2) = g(x'_1, x'_2) = g(f(x_1), f(x_2))$.", "Topological Regularization.", "Because the distributional space $X$ already contains useful semantic information, we want our specialized space $X'$ to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of $X$.", "To this end, we define an additional regularization objective that measures the distance between the original vectors $x_1$ and $x_2$ and their specialized counterparts $x'_1 = f(x_1)$ and $x'_2 = f(x_2)$, for all examples in the training set: $J_{REG} = \sum_{i=1}^N g(x_1^i, f(x_1^i)) + g(x_2^i, f(x_2^i))$ (4).", "We minimize the final objective function $J' = J + \lambda J_{REG}$.", "$J$ is either $J_{MSD}$ or $J_{CNT}$ and $\lambda$ is the regularization factor which determines how strictly we retain the topology of the original space."
"Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2, vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b), using context windows of size 2; (2) GLOVE-CC, vectors trained with the GloVe (Pennington et al., 2014) model on the Common Crawl; and (3) FASTTEXT, vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017).", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015).", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there are only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the dictionary of the pre-trained distributional vector space.", "For example, only 15.3% of the words from constraints are found in the whole vocabulary of SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% constraint words among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods, being able to specialize only the vectors of words seen in the external constraints, and the need for our global ER method which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function $g$ to cosine distance: $g(x_1, x_2) = 1 - \frac{x_1 \cdot x_2}{\|x_1\| \|x_2\|}$, and use the hyperbolic tangent as activation, $\phi = \tanh$.", "For each constraint $(w_i, w_j)$, we create $K = 4$ corresponding negative examples for both $w_i$ and $w_j$, resulting in micro-batches with $2K + 1 = 9$ training instances.", "We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values, the number of hidden layers $H = 5$ and their size $d_h = 1000$, and the topological regularization factor $\lambda = 0.3$, by minimizing the model's objective $J'$ on the validation set.", "We train the model in mini-batches, each containing $N_b = 100$ constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with the initial learning rate set to $10^{-4}$.", "We use the loss on the validation set as the early stopping criterion."
"Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: the SimLex-999 dataset and SimVerb-3500 (Gerz et al., 2016), a recent dataset containing human similarity ratings for 3,500 verb pairs.", "We use Spearman's $\rho$ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report the performance of the state-of-the-art local retrofitting model ATTRACT-REPEL, which is able to specialize only the words from the linguistic constraints."
"Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1.", "[Table 1 here: Spearman's $\rho$ on SimLex-999 (SL) and SimVerb-3500 (SV) for GLOVE-CC, FASTTEXT, and SGNS-W2 in the lexically disjoint and lexical overlap settings, comparing the distributional space ($X$) with the ER-specialized spaces ($X' = f(X)$).]", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform the original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective forces the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.", "Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only words seen in linguistic constraints, its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms, and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor $\lambda$).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., the employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models ($H = 5$, $\lambda = 0.3$), using different types of constraints on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "Note, however, that using only synonyms or only antonyms also improves over the original distributional space.", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor $\lambda$ ($H$ fixed to 5).", "The best performance is obtained for $\lambda = 0.3$.", "Smaller lambda values overly distort the original distributional space, whereas larger lambda values dampen the specialization effects of linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zero-shot specialization: we test if it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al. (2017) to induce a multilingual vector space containing word vectors of three other languages, German, Italian, and Croatian, along with the English vectors.", "Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian Skip-Gram vectors trained on the HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating the 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of the other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015).", "Results.", "The results are provided in Table 3.", "[Table 3 here: Spearman's $\rho$ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on the respective language-specific SimLex-999 variants.]", "They indicate that the ER models can substantially improve (e.g., by 13% for the German vector space) over distributional spaces also in the language transfer setup, without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1)."
"Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words, used less frequently and known to fewer speakers, with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš and Štajner (2015), which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "For each word in the input text LIGHT-LS retrieves the most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al. (2014).", "For each indicated complex word Horn et al. (2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš and Štajner, 2015) to quantify the quality and frequency of word replacements: (1) accuracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the replacement was correct).", "We plug into LIGHT-LS both unspecialized and specialized variants of the three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL.", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4.", "The ER-CNT model yields considerable gains over unspecialized spaces for both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6% of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications as done by the ER model, while ATTRACT-REPEL is limited to local vector updates only of words seen in the constraints.", "By learning a global specialization function the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: the original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understanding task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016).", "[Table 6 here: DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.]", "A DST model is typically the first component of a dialog system pipeline (Young, 2010), tasked with capturing the user's goals and updating the dialog state at each dialog turn.", "As in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT; https://github.com/nmrksic/neural-belief-tracker), a DST model that makes inferences purely based on pre-trained word vectors.", "NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate the performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results.", "We show DST performance in Table 6.", "The DST results tell a story similar to the word similarity and lexical simplification results: the ER model substantially improves over the distributional space.", "With linguistic specialization constraints covering 57% of the words from the WOZ dataset, the ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global specialization for the full vocabulary in downstream tasks grows as the coverage of test words by the specialization constraints drops."
"Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feed-forward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https://github.com/codogogo/explirefit." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-11
Downstream tasks DST and LS
Dialog state tracking (DST) first component of a dialog system Neural Belief Tracker (NBT) (Mrksic et al., 17) Makes inferences purely based on an embedding space of words in NBT test set (Wen et al., 17) covered by specialization constraints Lexical simplification (LS) complex words to simpler synonyms Light-LS (Glavas & Stajner, 15) decisions purely based on an embedding space of LS dataset words (Horn et al., 14) found in specialization constraints Crucial to distinguish similarity from relatedness DST: cheap pub in the east vs. expensive restaurant in the west LS: Ferrari's pilot Sebastian Vettel won the race., driver vs. airplane
Dialog state tracking (DST) first component of a dialog system Neural Belief Tracker (NBT) (Mrksic et al., 17) Makes inferences purely based on an embedding space of words in NBT test set (Wen et al., 17) covered by specialization constraints Lexical simplification (LS) complex words to simpler synonyms Light-LS (Glavas & Stajner, 15) decisions purely based on an embedding space of LS dataset words (Horn et al., 14) found in specialization constraints Crucial to distinguish similarity from relatedness DST: cheap pub in the east vs. expensive restaurant in the west LS: Ferrari's pilot Sebastian Vettel won the race., driver vs. airplane
[]
GEM-SciDuet-train-115#paper-1308#slide-12
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks -lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015 .", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into the, e.g., an SGNS-or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016 Nguyen et al., , 2017 .", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016) , these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016) .", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a , consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network.", "This network, shown in Figure 1b learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let X = {x i } N i=1 , x i ∈ R d be the d-dimensional distributional vector space that we want to spe- cialize (with V = {w i } N i=1 referring to the associated vocabulary) and let X = {x i } N i=1 be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let C = {(w i , w j , r) l } L l=1 be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w i and w j and a semantic relation r that holds between them.", "The most recent state-of-the-art retrofitting work Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., r l ∈ {ant, syn}.", "Let g be the function measuring the distance between words w i and w j based on their vector 
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs $(w_i, w_j, syn)$ should have a minimal possible distance score in the specialized space, i.e., $g(x'_i, x'_j) = g_{min}$; 2.", "All antonymy pairs $(w_i, w_j, ant)$ should have a maximal distance in the specialized space, i.e., $g(x'_i, x'_j) = g_{max}$; 3.", "The distances $g(x'_i, x'_k)$ in the specialized space between some word $w_i$ and all other words $w_k$ that are not synonyms or antonyms of $w_i$ should be in the interval $(g_{min}, g_{max})$.", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs $(w_i, w_j, r) \in C$ with distances that words $w_i$ and $w_j$ from those pairs have with other vocabulary words $w_m$.", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and non-antonymous words $g(x'_i, x'_m)$ in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016), we assume that the distances in the specialized space for all word pairs not found in $C$ should stay the same as in the distributional space: $g(x'_i, x'_m) = g(x_i, x_m)$.", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car-driver) being as similar as vectors of semantically similar words (e.g., car-automobile).", "To anticipate this, we compare the distances of pairs $(w_i, w_j, r) \in C$ with the distances for pairs $(w_i, w_m)$ and $(w_j, w_n)$, where $w_m$ and $w_n$ are negative examples: the vocabulary words that are most similar to $w_i$ and $w_j$, respectively, in the original distributional space $X$.", "Concretely, for each constraint $(w_i, w_j, r) \in C$ we retrieve (1) $K$ vocabulary words $\{w_m^k\}_{k=1}^{K}$ that are closest in the input distributional space (according to the distance function $g$) to the word $w_i$ and (2) $K$ vocabulary words $\{w_n^k\}_{k=1}^{K}$ that are closest to the word $w_j$.", "We then create, for each constraint $(w_i, w_j, r) \in C$, a corresponding set $M$ (termed micro-batch) of $2K + 1$ embedding pairs coupled with a corresponding distance in the input distributional space: $M(w_i, w_j, r) = \{(x_i, x_j, g_r)\} \cup \{(x_i, x_m^k, g(x_i, x_m^k))\}_{k=1}^{K} \cup \{(x_j, x_n^k, g(x_j, x_n^k))\}_{k=1}^{K}$ (1), with $g_r = g_{min}$ if $r = syn$ and $g_r = g_{max}$ if $r = ant$.",
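The micro-batch construction of Eq. (1) is compact enough to sketch in code. The following is a minimal illustration, assuming a plain Python dict `vectors` mapping words to numpy arrays, cosine distance as the metric g, and brute-force nearest-neighbor search; the names `micro_batch` and `nearest`, and the gmin/gmax values for cosine distance, are illustrative assumptions rather than details of the released implementation.

```python
import numpy as np

G_MIN, G_MAX = 0.0, 2.0  # assumed extremes of cosine distance; not taken from the paper

def g(x1, x2):
    # cosine distance between two vectors
    return 1.0 - np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))

def nearest(word, vectors, k):
    # k vocabulary words closest to `word` in the distributional space (brute force)
    scored = sorted((g(vectors[word], u), w) for w, u in vectors.items() if w != word)
    return [w for _, w in scored[:k]]

def micro_batch(w_i, w_j, rel, vectors, k=4):
    # Eq. (1): the constraint pair with its target distance g_r, plus k
    # nearest-neighbor negatives per word, each paired with its original distance
    g_r = G_MIN if rel == "syn" else G_MAX
    batch = [(vectors[w_i], vectors[w_j], g_r)]
    for w_m in nearest(w_i, vectors, k):
        batch.append((vectors[w_i], vectors[w_m], g(vectors[w_i], vectors[w_m])))
    for w_n in nearest(w_j, vectors, k):
        batch.append((vectors[w_j], vectors[w_n], g(vectors[w_j], vectors[w_n])))
    return batch  # 2k + 1 training instances, as in the text
```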
"Figure 1: (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model.", "(b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function $f$, defined as the deep fully-connected feed-forward network, with the distance metric $g$, measuring the distance between word vectors after their specialization.", "Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters $\theta$ of the parametrized function $f(x; \theta): R^d \to R^d$ (where $d$ is the dimensionality of the input space).", "The specialized embedding $x'_i$ of the word $w_i$ is then obtained as $x'_i = f(x_i; \theta)$.", "The specialized space $X'$ is obtained by transforming distributional vectors of all vocabulary words, $X' = f(X; \theta)$.", "We define the specialization function $f$ to be a multi-layer fully-connected feed-forward network with $H$ hidden layers and non-linear activations $\phi$.", "The illustration of this network is given in Figure 1b.", "The $i$-th hidden layer is defined with a weight matrix $W^i$ and a bias vector $b^i$: $h^i(x; \theta_i) = \phi(h^{i-1}(x; \theta_{i-1}) W^i + b^i)$ (2), where $\theta_i$ is the subset of the network's parameters up to the $i$-th layer.", "Note that in this notation, $x = h^0(x; \emptyset)$ and $x' = f(x; \theta) = h^H(x; \theta)$.", "Let $d_h$ be the size of the hidden layers.", "The network's parameters are then as follows: $W^1 \in R^{d \times d_h}$; $W^i \in R^{d_h \times d_h}$, $i \in \{2, \dots, H-1\}$; $W^H \in R^{d_h \times d}$; $b^i \in R^{d_h}$, $i \in \{1, \dots, H-1\}$; $b^H \in R^d$.", "Optimization Objectives We feed the micro-batches consisting of $2K + 1$ training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors $x_i$ and $x_j$ and a score $g$ denoting the desired distance between the specialized vectors $x'_i$ and $x'_j$ of corresponding words $w_i$ and $w_j$.", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of $N$ training instances, $\{(x_1^i, x_2^i, g^i)\}_{i=1}^{N}$.", "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: $J_{MSD} = \sum_{i=1}^{N} (g(f(x_1^i), f(x_2^i)) - g^i)^2$ (3).", "By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space $X'$ in which distances between all synonyms amount to $g_{min}$, distances between all antonyms amount to $g_{max}$, and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs $(w_i, w_j)$ have smaller (or larger) distances than corresponding non-constraint word pairs $(w_i, w_k)$ and $(w_j, w_k)$.",
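The network of Eq. (2) is a standard fully-connected stack and can be sketched directly. The snippet below assumes PyTorch and follows the layer dimensions given above ($W^1$: d×d_h, then d_h×d_h, then d_h×d), with the activation applied at every layer including the last, as Eq. (2) together with $x' = h^H$ implies; the default values d_h = 1000 and five layers are taken from the experimental setup reported later in this record. This is an illustrative sketch, not the authors' released code.

```python
import torch
import torch.nn as nn

class SpecializationFunction(nn.Module):
    """f(x; theta): R^d -> R^d, a fully-connected feed-forward net with H layers
    and non-linear activation phi, following Eq. (2)."""
    def __init__(self, d, d_h=1000, num_layers=5, phi=torch.tanh):
        super().__init__()
        dims = [d] + [d_h] * (num_layers - 1) + [d]
        self.linears = nn.ModuleList(nn.Linear(m, n) for m, n in zip(dims, dims[1:]))
        self.phi = phi

    def forward(self, x):
        h = x
        for linear in self.linears:
            h = self.phi(linear(h))  # h^i = phi(h^{i-1} W^i + b^i)
        return h                     # x' = h^H(x; theta)
```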
"Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective micro-batch (cf. Eq. (1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonyms (antonyms) $(w_i, w_j)$ are larger (or smaller, for antonyms) than for pairs $(w_i, w_k)$ and $(w_j, w_k)$ involving the same words $w_i$ and $w_j$, respectively.", "Let $S$ and $A$ be the sets of micro-batches created from synonymy and antonymy constraints.", "Let $M_s = \{(x_1^i, x_2^i, g^i)\}_{i=1}^{2K+1}$ be one micro-batch created from one synonymy constraint and let $M_a$ be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for $i = 1$) in every micro-batch corresponds to the constraint pair and the remaining $2K$ triples (i.e., for $i \in \{2, \dots, 2K+1\}$) to respective non-constraint word pairs.", "We then define the contrastive objective as follows: $J_{CNT} = \sum_{M_s \in S} \sum_{i=2}^{2K+1} ((g'^i - g_{min}) - (g^i - g'^1))^2 + \sum_{M_a \in A} \sum_{i=2}^{2K+1} ((g_{max} - g'^i) - (g'^1 - g^i))^2$, where $g'$ is a short-hand notation for the distance between vectors in the specialized space, i.e., $g'(x_1, x_2) = g(x'_1, x'_2) = g(f(x_1), f(x_2))$.", "Topological Regularization.", "Because the distributional space $X$ already contains useful semantic information, we want our specialized space $X'$ to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of $X$.", "To this end, we define an additional regularization objective that measures the distance between the original vectors $x_1$ and $x_2$ and their specialized counterparts $x'_1 = f(x_1)$ and $x'_2 = f(x_2)$, for all examples in the training set: $J_{REG} = \sum_{i=1}^{N} g(x_1^i, f(x_1^i)) + g(x_2^i, f(x_2^i))$ (4).", "We minimize the final objective function $J = J' + \lambda J_{REG}$.", "$J'$ is either $J_{MSD}$ or $J_{CNT}$, and $\lambda$ is the regularization factor which determines how strictly we retain the topology of the original space.", "Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 - vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b), using context windows of size 2; (2) GLOVE-CC - vectors trained with the GloVe (Pennington et al., 2014) model on the Common Crawl; and (3) FASTTEXT - vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017).", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015).", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there are only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the dictionary of the pre-trained distributional vector space.", "For example, only 15.3% of the words from constraints are found in the whole vocabulary of SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% constraint words
among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods, being able to specialize only the vectors of words seen in the external constraints, and the need for our global ER method which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function g to cosine distance: g(x 1 , x 2 ) = 1 − (x 1 · x 2 /( x 1 x 2 )) and use the hyperbolic tangent as activation, φ = tanh.", "For each constraint (w i , w j ), we create K = 4 corresponding negative examples for both w i and w j , resulting in micro-batches with 2K + 1 = 9 training instances.", "3 We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values, the number of hidden layers H = 5 and their size d h = 1000, and the topological regularization factor λ = 0.3 by minimizing the model's objective J on the validation set.", "We train the model in mini-batches, each containing N b = 100 constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with initial learning rate set to 10 −4 .", "We use the loss on the validation set as the early stopping criteria.", "Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: SimLex-999 dataset and SimVerb-3500 (Gerz et al., 2016) , a recent dataset containing human similarity ratings for 3,500 verb pairs.", "4 We use Spearman's ρ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report performance of the state-of-the-art local retrofitting model ATTRACT-REPEL , which is able to specialize only the words from the linguistic constraints.", "Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1 .", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective enforces 
the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.", "Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only words seen in linguistic constraints, its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms, and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor λ).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "Note, however, that using only synonyms or only antonyms also improves over the original distributional space.", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor λ (H fixed to 5).", "The best performance is obtained for λ = 0.3.", "Smaller lambda values overly distort the original distributional space, whereas larger lambda values dampen the specialization effects of linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zero-shot specialization: we test if it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al. (2017) to induce a multilingual vector space containing word vectors of three other languages - German, Italian, and Croatian - along with the English vectors.", "Table 3: Spearman's ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on respective language-specific SimLex-999 variants.", "Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian
Skip-Gram vectors trained on HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015; .", "Results.", "The results are provided in Table 3 .", "They indicate that the ER models can substantially improve (e.g., by 13% for German vector space) over distributional spaces also in the language transfer setup without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1 ).", "Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words -used less frequently and known to fewer speakers -with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš andŠtajner (2015) which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "8 For each word in the input text LIGHT-LS retrieves most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging-in vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al.", "(2014) .", "For each indicated complex word Horn et al.", "(2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš andŠtajner, 2015) to quantify the quality and frequency of word replacements: (1) accurracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the 
replacement was correct).", "We plug into LIGHT-LS both unspecialized and specialized variants of three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL.", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4.", "The ER-CNT model yields considerable gains over unspecialized spaces for both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6% of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications, as done by the ER model, while ATTRACT-REPEL is limited to local vector updates of only those words seen in the constraints.", "By learning a global specialization function, the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: the original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understanding task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016).", "Table 6: DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.", "A DST model is typically the first component of a dialog system pipeline (Young, 2010), tasked with capturing the user's goals and updating the dialog state at each dialog turn.", "As in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors (code: https://github.com/nmrksic/neural-belief-tracker).", "NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset, which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results.", "We show DST performance in Table 6.", "The DST results tell a similar story to the word similarity and lexical simplification results: the ER model substantially improves over the distributional space.", "With linguistic specialization constraints covering 57% of words from the WOZ dataset, the ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global
specialization for the full vocabulary in downstream tasks grows as the coverage of test words by the specialization constraints drops.", "Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feed-forward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https://github.com/codogogo/explirefit." ] }
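Pulling the pieces of this record together, a training step that minimizes J = J_MSD + λ·J_REG over micro-batches could look as follows. This is a sketch under stated assumptions (PyTorch, the reported Adam learning rate of 10^-4 and λ = 0.3, batched tensors), not the published training code; the contrastive objective J_CNT would replace `j_msd` analogously, and early stopping on a validation split is omitted.

```python
import torch
import torch.nn.functional as F

def cosine_distance(a, b):
    # g(x1, x2) = 1 - cos(x1, x2), applied row-wise to batched vectors
    return 1.0 - F.cosine_similarity(a, b, dim=-1)

def train(f, batches, lr=1e-4, lam=0.3, epochs=10):
    """Minimize J = J_MSD + lam * J_REG (Eqs. (3) and (4)) with Adam."""
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(epochs):
        for x1, x2, target_g in batches:   # shapes: (B, d), (B, d), (B,)
            f1, f2 = f(x1), f(x2)
            j_msd = ((cosine_distance(f1, f2) - target_g) ** 2).sum()
            j_reg = (cosine_distance(x1, f1) + cosine_distance(x2, f2)).sum()
            loss = j_msd + lam * j_reg
            opt.zero_grad()
            loss.backward()
            opt.step()
```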
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-12
Downstream tasks Evaluation
Lexical simplification (LS) and Dialog state tracking (DST) GloVe-CC fastText SGNS-W2 GloVe-CC Distributional Attract-Repel Explirefit Distributional Attract-Repel Explirefit
Lexical simplification (LS) and Dialog state tracking (DST) GloVe-CC fastText SGNS-W2 GloVe-CC Distributional Attract-Repel Explirefit Distributional Attract-Repel Explirefit
[]
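The zero-shot language transfer evaluated in Section 5.2 of the record above boils down to two steps: map the foreign distributional space into the shared (English) space, then push every mapped vector through the English-trained specialization function. The sketch below uses a plain least-squares projection as a stand-in for the mapping model of Smith et al. (2017), which is orthogonality-constrained, so it is an approximation for illustration only.

```python
import numpy as np
import torch

def learn_projection(src_vecs, tgt_vecs):
    # Least-squares map W from foreign vectors to the English space, fit on
    # rows of translation pairs; a simplification of Smith et al. (2017).
    W, *_ = np.linalg.lstsq(src_vecs, tgt_vecs, rcond=None)
    return W

def specialize_foreign(X_foreign, W, f):
    # Zero-shot specialization: project into the shared space, then apply the
    # English-trained specialization function f to every vector.
    mapped = torch.from_numpy(X_foreign @ W).float()
    with torch.no_grad():
        return f(mapped).numpy()
```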
GEM-SciDuet-train-115#paper-1308#slide-14
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks -lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015 .", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into the, e.g., an SGNS-or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016 Nguyen et al., , 2017 .", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016) , these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016) .", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a , consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network.", "This network, shown in Figure 1b learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let X = {x i } N i=1 , x i ∈ R d be the d-dimensional distributional vector space that we want to spe- cialize (with V = {w i } N i=1 referring to the associated vocabulary) and let X = {x i } N i=1 be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let C = {(w i , w j , r) l } L l=1 be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w i and w j and a semantic relation r that holds between them.", "The most recent state-of-the-art retrofitting work Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., r l ∈ {ant, syn}.", "Let g be the function measuring the distance between words w i and w j based on their vector 
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs $(w_i, w_j, syn)$ should have a minimal possible distance score in the specialized space, i.e., $g(x'_i, x'_j) = g_{min}$; 2.", "All antonymy pairs $(w_i, w_j, ant)$ should have a maximal distance in the specialized space, i.e., $g(x'_i, x'_j) = g_{max}$; 3.", "The distances $g(x'_i, x'_k)$ in the specialized space between some word $w_i$ and all other words $w_k$ that are not synonyms or antonyms of $w_i$ should be in the interval $(g_{min}, g_{max})$.", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs $(w_i, w_j, r) \in C$ with distances that words $w_i$ and $w_j$ from those pairs have with other vocabulary words $w_m$.", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and non-antonymous words $g(x'_i, x'_m)$ in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016), we assume that the distances in the specialized space for all word pairs not found in $C$ should stay the same as in the distributional space: $g(x'_i, x'_m) = g(x_i, x_m)$.", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car-driver) being as similar as vectors of semantically similar words (e.g., car-automobile).", "To anticipate this, we compare the distances of pairs $(w_i, w_j, r) \in C$ with the distances for pairs $(w_i, w_m)$ and $(w_j, w_n)$, where $w_m$ and $w_n$ are negative examples: the vocabulary words that are most similar to $w_i$ and $w_j$, respectively, in the original distributional space $X$.", "Concretely, for each constraint $(w_i, w_j, r) \in C$ we retrieve (1) $K$ vocabulary words $\{w_m^k\}_{k=1}^{K}$ that are closest in the input distributional space (according to the distance function $g$) to the word $w_i$ and (2) $K$ vocabulary words $\{w_n^k\}_{k=1}^{K}$ that are closest to the word $w_j$.", "We then create, for each constraint $(w_i, w_j, r) \in C$, a corresponding set $M$ (termed micro-batch) of $2K + 1$ embedding pairs coupled with a corresponding distance in the input distributional space: $M(w_i, w_j, r) = \{(x_i, x_j, g_r)\} \cup \{(x_i, x_m^k, g(x_i, x_m^k))\}_{k=1}^{K} \cup \{(x_j, x_n^k, g(x_j, x_n^k))\}_{k=1}^{K}$ (1), with $g_r = g_{min}$ if $r = syn$ and $g_r = g_{max}$ if $r = ant$.",
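Retrieving the K closest vocabulary words that serve as negative examples is the only non-trivial step of the micro-batch construction; with an L2-normalized embedding matrix it is a single matrix-vector product. A vectorized numpy sketch follows, with illustrative variable names.

```python
import numpy as np

def k_nearest_indices(X_norm, i, k=4):
    """Indices of the k words closest (by cosine) to word i. Rows of X_norm
    are assumed L2-normalized, so dot products are cosine similarities and
    1 - similarity is the distance g used in the text."""
    sims = X_norm @ X_norm[i]
    sims[i] = -np.inf                     # exclude the word itself
    return np.argpartition(-sims, k)[:k]  # top-k indices, in arbitrary order
```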
"Figure 1: (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model.", "(b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function $f$, defined as the deep fully-connected feed-forward network, with the distance metric $g$, measuring the distance between word vectors after their specialization.", "Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters $\theta$ of the parametrized function $f(x; \theta): R^d \to R^d$ (where $d$ is the dimensionality of the input space).", "The specialized embedding $x'_i$ of the word $w_i$ is then obtained as $x'_i = f(x_i; \theta)$.", "The specialized space $X'$ is obtained by transforming distributional vectors of all vocabulary words, $X' = f(X; \theta)$.", "We define the specialization function $f$ to be a multi-layer fully-connected feed-forward network with $H$ hidden layers and non-linear activations $\phi$.", "The illustration of this network is given in Figure 1b.", "The $i$-th hidden layer is defined with a weight matrix $W^i$ and a bias vector $b^i$: $h^i(x; \theta_i) = \phi(h^{i-1}(x; \theta_{i-1}) W^i + b^i)$ (2), where $\theta_i$ is the subset of the network's parameters up to the $i$-th layer.", "Note that in this notation, $x = h^0(x; \emptyset)$ and $x' = f(x; \theta) = h^H(x; \theta)$.", "Let $d_h$ be the size of the hidden layers.", "The network's parameters are then as follows: $W^1 \in R^{d \times d_h}$; $W^i \in R^{d_h \times d_h}$, $i \in \{2, \dots, H-1\}$; $W^H \in R^{d_h \times d}$; $b^i \in R^{d_h}$, $i \in \{1, \dots, H-1\}$; $b^H \in R^d$.", "Optimization Objectives We feed the micro-batches consisting of $2K + 1$ training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors $x_i$ and $x_j$ and a score $g$ denoting the desired distance between the specialized vectors $x'_i$ and $x'_j$ of corresponding words $w_i$ and $w_j$.", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of $N$ training instances, $\{(x_1^i, x_2^i, g^i)\}_{i=1}^{N}$.", "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: $J_{MSD} = \sum_{i=1}^{N} (g(f(x_1^i), f(x_2^i)) - g^i)^2$ (3).", "By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space $X'$ in which distances between all synonyms amount to $g_{min}$, distances between all antonyms amount to $g_{max}$, and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs $(w_i, w_j)$ have smaller (or larger) distances than corresponding non-constraint word pairs $(w_i, w_k)$ and $(w_j, w_k)$.",
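Once trained, the function is applied to the entire vocabulary, X' = f(X; θ), which is what lets ER specialize words never seen in any constraint. A chunked forward pass keeps peak memory bounded for large vocabularies; this sketch assumes a PyTorch-style module like the one sketched in the previous record.

```python
import torch

def specialize_all(f, X, chunk=4096):
    # X' = f(X; theta): transform every distributional vector with the trained
    # specialization function, processing the matrix in chunks
    f.eval()
    parts = []
    with torch.no_grad():
        for start in range(0, X.shape[0], chunk):
            parts.append(f(X[start:start + chunk]))
    return torch.cat(parts, dim=0)
```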
synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective microbatch (cf.", "Eq.", "(1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonyms (antonyms) (w i , w j ) are larger (or smaller, for antonyms) than for pairs (w i , w k ) and (w j , w k ) involving the same words w i and w j , respectively.", "Let S and A be the sets of microbatches created from synonymy and antonymy con- straints.", "Let M s = {(x i 1 , x i 2 , g i )} 2K+1 i=1 be one micro-batch created from one synonymy constraint and let M a be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for i = 1) in every microbatch corresponds to the constraint pair and the remaining 2K triples (i.e., for i ∈ {2, .", ".", ".", ", 2K + 1}) to respective non-constraint word pairs.", "We then define the contrastive objective as follows: JCNT = Ms∈S 2K+1 i=2 (g i − gmin ) − (g i − g 1 ) 2 + Ma∈A 2K+1 i=2 (gmax − g i ) − (g 1 − g i ) 2 where g is a short-hand notation for the distance between vectors in the specialized space, i.e., g (x 1 , x 2 ) = g(x 1 , x 2 ) = g(f (x 1 ), f (x 2 )).", "Topological Regularization.", "Because the distributional space X already contains useful semantic information, we want our specialized space X to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of X.", "To this end, we define an additional regularization objective that measures the distance between the original vectors x 1 and x 2 and their specialized counterparts x 1 = f (x 1 ) and x 2 = f (x 2 ), for all examples in the training set: JREG = N i=1 g(x i 1 , f (x i 1 )) + g(x i 2 , f (x i 2 )) (4) We minimize the final objective function J = J + λJ REG .", "J is either J MSD or J CNT and λ is the regularization factor which determines how strictly we retain the topology of the original space.", "Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 -vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b) , using the context windows of size 2; (2) GLOVE-CC -vectors trained with the GloVe (Pennington et al., 2014 ) model on the Common Crawl; and (3) FASTTEXT -vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017) .", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015) .", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there is only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the dictionary of the pre-trained distributional vector space.", "For example, only 15.3% of the words from constraints are found in the whole vocabulary of SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% constraint words 
among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods, namely that they can specialize only the vectors of words seen in the external constraints, and the need for our global ER method, which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function $g$ to cosine distance, $g(\mathbf{x}_1, \mathbf{x}_2) = 1 - \mathbf{x}_1 \cdot \mathbf{x}_2 / (\|\mathbf{x}_1\| \|\mathbf{x}_2\|)$, and use the hyperbolic tangent as activation, $\phi = \tanh$.", "For each constraint $(w_i, w_j)$, we create $K = 4$ corresponding negative examples for both $w_i$ and $w_j$, resulting in micro-batches with $2K + 1 = 9$ training instances.", "We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values, i.e., the number of hidden layers $H = 5$, their size $d_h = 1000$, and the topological regularization factor $\lambda = 0.3$, by minimizing the model's objective $J'$ on the validation set.", "We train the model in mini-batches, each containing $N_b = 100$ constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with the initial learning rate set to $10^{-4}$.", "We use the loss on the validation set as the early stopping criterion.", "Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: the SimLex-999 dataset (Hill et al., 2015) and SimVerb-3500 (Gerz et al., 2016), a recent dataset containing human similarity ratings for 3,500 verb pairs.", "We use Spearman's ρ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report the performance of the state-of-the-art local retrofitting model ATTRACT-REPEL (Mrkšić et al., 2017), which is able to specialize only the words from the linguistic constraints.", "Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1.", "[Table 1: Spearman's ρ correlation scores on SimLex-999 (SL) and SimVerb-3500 (SV) for the distributional spaces GLOVE-CC, FASTTEXT, and SGNS-W2 and their ER-specialized counterparts, in the lexically disjoint and lexical overlap settings.]", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform the original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from the linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective forces the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.", "Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for the FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only the words seen in the linguistic constraints, its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms, and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor $\lambda$).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., the employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models ($H = 5$, $\lambda = 0.3$), using different types of constraints, on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "Note, however, that using only synonyms or only antonyms also improves over the original distributional space.", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topological regularization factor $\lambda$ ($H$ fixed to 5).", "The best performance is obtained for $\lambda = 0.3$.", "Smaller $\lambda$ values overly distort the original distributional space, whereas larger $\lambda$ values dampen the specialization effects of the linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zero-shot specialization: we test whether it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al. (2017) to induce a multilingual vector space containing word vectors of three other languages (German, Italian, and Croatian) along with the English vectors.", "[Table 3: Spearman's ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on the respective language-specific SimLex-999 variants.]", "Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian
Skip-Gram vectors trained on the HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating the 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints to specialize the distributional spaces of the other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015).", "Results.", "The results are provided in Table 3.", "They indicate that the ER models can substantially improve (e.g., by 13% for the German vector space) over distributional spaces also in the language transfer setup, without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1).", "Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words, i.e., words used less frequently and known to fewer speakers, with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with merely semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš and Štajner (2015), which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "For each word in the input text, LIGHT-LS retrieves the most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al. (2014).", "For each indicated complex word, Horn et al. (2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš and Štajner, 2015) to quantify the quality and frequency of word replacements: (1) accuracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the replacement was correct).", "We plug into LIGHT-LS both the unspecialized and specialized variants of the three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL.", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4.", "The ER-CNT model yields considerable gains over unspecialized spaces on both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and it also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6% of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications, as done by the ER model, while ATTRACT-REPEL is limited to local vector updates only of words seen in the constraints.", "By learning a global specialization function, the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: the original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understanding task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016).", "[Table 6: DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.]", "A DST model is typically the first component of a dialog system pipeline (Young, 2010), tasked with capturing the user's goals and updating the dialog state at each dialog turn.", "As in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors (Mrkšić et al., 2017; code available at https://github.com/nmrksic/neural-belief-tracker).", "NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset, which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate the performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results.", "We show the DST performance in Table 6.", "The DST results tell a story similar to the word similarity and lexical simplification results: the ER model substantially improves over the distributional space.", "With linguistic specialization constraints covering 57% of the words from the WOZ dataset, the ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global specialization for the full vocabulary in downstream tasks grows as the coverage of test words by the specialization constraints drops.", "Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update the vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feed-forward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https://github.com/codogogo/explirefit." ] }
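The micro-batch construction of Section 3.1 reduces to a nearest-neighbour lookup plus target-distance assignment. A minimal NumPy sketch of that step follows; the brute-force cosine scan, function names, and data layout are illustrative assumptions, not the authors' released implementation:

```python
import numpy as np

def cosine_distances(X: np.ndarray, v: np.ndarray) -> np.ndarray:
    """g(v, x) = 1 - cos(v, x) between a vector v and every row of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - Xn @ (v / np.linalg.norm(v))

def build_micro_batch(X, index, w_i, w_j, relation,
                      K=4, g_min=0.0, g_max=2.0):
    """Turns one constraint (w_i, w_j, relation) into the 2K + 1 training
    triples of Eq. (1): the constraint pair with target distance g_r, plus
    the K distributional nearest neighbours of each word, whose original
    distances serve as their target distances."""
    i, j = index[w_i], index[w_j]
    g_r = g_min if relation == "syn" else g_max
    triples = [(X[i], X[j], g_r)]
    for row in (i, j):
        d = cosine_distances(X, X[row])
        d[row] = np.inf                  # exclude the word itself
        for k in np.argsort(d)[:K]:      # K closest neighbours as negatives
            triples.append((X[row], X[k], d[k]))
    return triples
```

For a realistic vocabulary, the neighbour search would typically be batched or approximated, since the scan above is linear in vocabulary size per constraint.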
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-14
Language transfer
Lexico-semantic resources such as WordNet needed to collect synonymy and antonymy constraints Idea: use shared bilingual embedding spaces to transfer the specialization to another language *Image taken from Lample et al., ICLR 2018 Most models learn a (simple) linear mapping
Lexico-semantic resources such as WordNet needed to collect synonymy and antonymy constraints Idea: use shared bilingual embedding spaces to transfer the specialization to another language *Image taken from Lample et al., ICLR 2018 Most models learn a (simple) linear mapping
[]
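The "(simple) linear mapping" mentioned on the slide above can be made concrete: the core of bilingual mapping methods such as Smith et al. (2017) is the orthogonal Procrustes solution, sketched below. The published method adds refinements such as the inverted softmax, omitted here, and the seed-lexicon variable names are assumptions:

```python
import numpy as np

def orthogonal_map(X_src: np.ndarray, X_tgt: np.ndarray) -> np.ndarray:
    """Solve min_Q ||X_src Q - X_tgt||_F over orthogonal Q (Procrustes).
    Rows of X_src/X_tgt hold the embeddings of a seed translation lexicon,
    e.g., the 4,000 Google-translated word pairs used in the paper."""
    U, _, Vt = np.linalg.svd(X_src.T @ X_tgt)
    return U @ Vt

# zero-shot specialization of, e.g., German vectors X_de:
#   Q = orthogonal_map(X_de[seed_de], X_en[seed_en])
#   X_de_in_en = X_de @ Q               # map into the English space
#   X_de_specialized = f(X_de_in_en)    # apply the English-trained ER function
```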
GEM-SciDuet-train-115#paper-1308#slide-15
GEM-SciDuet-train-115#paper-1308#slide-15
Cross-lingual transfer results
Transfer to three languages: DE, IT, and HR Different levels of proximity to English Variants of SimLex-999 exist for each of these three languages German (DE) Italian (IT) Croatian (HR) Distributional ExpliRefit (language transfer)
Transfer to three languages: DE, IT, and HR Different levels of proximity to English Variants of SimLex-999 exist for each of these three languages German (DE) Italian (IT) Croatian (HR) Distributional ExpliRefit (language transfer)
[]
GEM-SciDuet-train-115#paper-1308#slide-16
1308
Explicit Retrofitting of Distributional Word Vectors
Semantic specialization of distributional word vectors, referred to as retrofitting, is a process of fine-tuning word vectors using external lexical knowledge in order to better embed some semantic relation. Existing retrofitting models integrate linguistic constraints directly into learning objectives and, consequently, specialize only the vectors of words from the constraints. In this work, in contrast, we transform external lexico-semantic relations into training examples which we use to learn an explicit retrofitting model (ER). The ER model allows us to learn a global specialization function and specialize the vectors of words unobserved in the training data as well. We report large gains over original distributional vector spaces in (1) intrinsic word similarity evaluation and on (2) two downstream tasks -lexical simplification and dialog state tracking. Finally, we also successfully specialize vector spaces of new languages (i.e., unseen in the training data) by coupling ER with shared multilingual distributional vector spaces.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212 ], "paper_content_text": [ "Introduction Algebraic modeling of word vector spaces is one of the core research areas in modern Natural Language Processing (NLP) and its usefulness has been shown across a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016) .", "Commonly employed distributional models for word vector induction are based on the distributional hypothesis (Harris, 1954) , i.e., they rely on word co-occurrences obtained from large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014a; Levy et al., 2015; Bojanowski et al., 2017) .", "The dependence on purely distributional knowledge results in a well-known tendency of fusing semantic similarity with other types of semantic relatedness Schwartz et al., 2015) in the induced vector spaces.", "Consequently, the similarity between distributional vectors indicates just an abstract semantic association and not a precise semantic relation (Yih et al., 2012; Mohammad et al., 2013) .", "For example, it is difficult to discern synonyms from antonyms in distributional spaces.", "This property has a particularly negative effect on NLP applications like text simplification and statistical dialog modeling, in which discerning semantic similarity from other types of semantic relatedness is pivotal to the system performance (Glavaš anď Stajner, 2015; Faruqui et al., 2015; Mrkšić et al., 2016; Kim et al., 2016b) .", "A standard solution is to move beyond purely unsupervised learning of word representations, in a process referred to as word vector space specialization or retrofitting.", "Specialization models leverage external lexical knowledge from lexical resources, such as WordNet (Fellbaum, 1998) , the Paraphrase Database (Ganitkevitch et al., 2013) , or BabelNet (Navigli and Ponzetto, 2012) , to specialize distributional spaces for a particular lexical relation, e.g., synonymy (Faruqui et al., 2015; or hypernymy (Glavaš and Ponzetto, 2017) .", "External constraints are commonly pairs of words between which a particular relation holds.", "Existing specialization methods exploit the external linguistic constraints in two prominent ways: (1) joint specialization models modify the learning objective of the original distributional model by integrating the constraints into it (Yu and Dredze, 2014; Kiela et al., 2015; Nguyen et al., 2016, inter alia) ; (2) post-processing models fine-tune distributional vectors retroactively after training to satisfy the external constraints (Faruqui et al., 2015; Mrkšić et al., 
2017, inter alia) .", "The latter, in general, outperform the former (Mrkšić et al., 2016) .", "Retrofitting models can be applied to arbitrary distributional spaces but they suffer from a major limitation -they locally update only vectors of words present in the external constraints, whereas vectors of all other (unseen) words remain intact.", "In contrast, joint specialization models propagate the external signal to all words via the joint objective.", "In this paper, we propose a new approach for specializing word vectors that unifies the strengths of both prior strategies, while mitigating their limitations.", "Same as retrofitting models, our novel framework, termed explicit retrofitting (ER), is applicable to arbitrary distributional spaces.", "At the same time, the method learns an explicit global specialization function that can specialize vectors for all vocabulary words, similar as in joint models.", "Yet, unlike the joint models, ER does not require expensive re-training on large text corpora, but is directly applied on top of any pre-trained vector space.", "The key idea of ER is to directly learn a specialization function in a supervised setting, using lexical constraints as training instances.", "In other words, our model, implemented as a deep feedforward neural architecture, learns a (non-linear) function which \"translates\" word vectors from the distributional space into the specialized space.", "We show that the proposed ER approach yields considerable gains over distributional spaces in word similarity evaluation on standard benchmarks Gerz et al., 2016) , as well as in two downstream tasks -lexical simplification and dialog state tracking.", "Furthermore, we show that, by coupling the ER model with shared multilingual embedding spaces (Mikolov et al., 2013a; Smith et al., 2017) , we can also specialize distributional spaces for languages unseen in the training data in a zero-shot language transfer setup.", "In other words, we show that an explicit retrofitting model trained with external constraints from one language can be successfully used to specialize the distributional space of another language.", "Related Work The importance of vector space specialization for downstream tasks has been observed, inter alia, for dialog state tracking Vulić et al., 2017b) , spoken language understanding (Kim et al., 2016b,a) , judging lexical entailment (Nguyen et al., 2017; Glavaš and Ponzetto, 2017; , lexical contrast modeling (Nguyen et al., 2016) , and cross-lingual transfer of lexical resources (Vulić et al., 2017a) .", "A common goal pertaining to all retrofitting models is to pull the vectors of similar words (e.g., synonyms) closer together, while some models also push the vectors of dissimilar words (e.g., antonyms) further apart.", "The specialization methods fall into two categories: (1) joint specialization methods, and (2) post-processing (i.e., retrofitting) methods.", "Methods from both categories make use of similar lexical resources -they typically leverage WordNet (Fellbaum, 1998) , FrameNet (Baker et al., 1998) , the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) , morphological lexicons (Cotterell et al., 2016) , or simple handcrafted linguistic rules (Vulić et al., 2017b) .", "In what follows, we discuss the two model categories.", "Joint Specialization Models.", "These models integrate external constraints into the distributional training procedure of general word embedding algorithms such as CBOW, Skip-Gram (Mikolov et al., 2013b 
), or Canonical Correlation Analysis (Dhillon et al., 2015 .", "They modify the prior or the regularization of the original objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or integrate the constraints directly into the, e.g., an SGNS-or CBOW-style objective (Liu et al., 2015; Ono et al., 2015; Bollegala et al., 2016; Osborne et al., 2016; Nguyen et al., 2016 Nguyen et al., , 2017 .", "Besides generally displaying lower performance compared to retrofitting methods (Mrkšić et al., 2016) , these models are also tied to the distributional objective and any change of the underlying distributional model induces a change of the entire joint model.", "This makes them less versatile than the retrofitting methods.", "Post-Processing Models.", "Models from the popularly termed retrofitting family inject lexical knowledge from external resources into arbitrary pretrained word vectors (Faruqui et al., 2015; Rothe and Schütze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016) .", "These models fine-tune the vectors of words present in the linguistic constraints to reflect the ground-truth lexical knowledge.", "While the large majority of specialization models from both classes operate only with similarity constraints, a line of recent work (Mrkšić et al., 2016; Vulić et al., 2017b) demonstrates that knowledge about both similar and dissimilar words leads to improved performance in downstream tasks.", "The main shortcoming of the existing retrofitting models is their inability to specialize vectors of words unseen in external lexical resources.", "Our explicit retrofitting framework brings together desirable properties of both model classes: (1) unlike joint models, it does not require adaptation to the underlying distributional model and expensive re-training, i.e., it is applicable to any pre-trained distributional space; (2) it allows for easy integration of both similarity and dissimilarity constraints into the specialization process; and (3) unlike post-processors, it specializes the full vocabulary of the original distributional space and not only vectors of words from external constraints.", "Explicit Retrofitting Our explicit retrofitting (ER) approach, illustrated by Figure 1a , consists of two major components: (1) an algorithm for preparing training instances from external lexical constraints, and (2) a supervised specialization model, based on a deep feedforward neural network.", "This network, shown in Figure 1b learns a non-linear global specialization function from the training instances.", "From Constraints to Training Instances Let X = {x i } N i=1 , x i ∈ R d be the d-dimensional distributional vector space that we want to spe- cialize (with V = {w i } N i=1 referring to the associated vocabulary) and let X = {x i } N i=1 be the corresponding specialized vector space that we seek to obtain through explicit retrofitting.", "Let C = {(w i , w j , r) l } L l=1 be the set of L linguistic constraints from an external lexical resource, each consisting of a pair of vocabulary words w i and w j and a semantic relation r that holds between them.", "The most recent state-of-the-art retrofitting work Vulić et al., 2017b) suggests that using both similarity and dissimilarity constraints leads to better performance compared to using only similarity constraints.", "Therefore, we use synonymy and antonymy relations from external resources, i.e., r l ∈ {ant, syn}.", "Let g be the function measuring the distance between words w i and w j based on their vector 
representations.", "The algorithm for preparing training instances from constraints is guided by the following assumptions: 1.", "All synonymy pairs (w i , w j , syn) should have a minimal possible distance score in the spe-cialized space, i.e., g(x i , x j ) = g min ; 1 2.", "All antonymy pairs (w i , w j , ant) should have a maximal distance in the specialized space, i.e., g(x i , x j ) = g max ; 2 3.", "The distances g(x i , x k ) in the specialized space between some word w i and all other words w k that are not synonyms or antonyms of w i should be in the interval (g min , g max ).", "Our goal is to discern semantic similarity from semantic relatedness by comparing, in the specialized space, the distances between word pairs (w i , w j , r) ∈ C with distances that words w i and w j from those pairs have with other vocabulary words w m .", "It is intuitive to enforce that the synonyms are as close as possible and antonyms as far as possible.", "However, we do not know what the distances between non-synonymous and nonantonymous words g(x i , x m ) in the specialized space should look like.", "This is why, for all other words, similar to (Faruqui et al., 2016; , we assume that the distances in the specialized space for all word pairs not found in C should stay the same as in the distributional space: g(x i , x m ) = g(x i , x m ) .", "This way we preserve the useful semantic content available in the original distributional space.", "In downstream tasks most errors stem from vectors of semantically related words (e.g., car driver) being as similar as vectors of semantically similar words (e.g., carautomobile).", "To anticipate this, we compare the distances of pairs (w i , w j , r) ∈ C with the distances for pairs (w i , w m ) and (w j , w n ), where w m and w n are negative examples: the vocabulary words that are most similar to w i and w j , respectively, in the original distributional space X.", "Concretely, for each constraint (w i , w j , r) ∈ C we retrieve (1) K vocabulary words {w k m } K k=1 that are closest in the input distributional space (according to the distance function g) to the word w i and (2) K vocabulary words {w k n } K k=1 that are closest to the word w j .", "We then create, for each constraint (w i , w j , r) ∈ C, a corresponding set M (termed micro-batch) of 2K + 1 embedding pairs coupled with a corresponding distance in the input distributional space: External knowledge (bright, light, syn) (source, target, ant) (buy, acquire, syn) ... x' j =f(x j ) Distributional vector space acquire  [0.11, -0.23, ...,1.11] bright  [0.11, -0.23, ..., 1.11] buy  [-0.41, 0.29, ..., -1.07] ... target  [-1.7, 0.13, ..., -0.92] top  [-0.21, -0.52, ..., 0.47] ... 
Training instances (micro-batches) x' i =f(x i ) (b) Supervised specialization model Figure 1 : (a) High-level illustration of the explicit retrofitting approach: lexical constraints, i.e., pairs of synonyms and antonyms, are transformed into respective micro-batches, which are then used to train the supervised specialization model.", "(b) The low-level implementation of the specialization model, combining the non-linear embedding specialization function f , defined as the deep fully-connected feed-forward network, with the distance metric g, measuring the distance between word vectors after their specialization.", "M (wi, wj, r) = {(xi, xj, gr)} ∪ {(xi, x k m , g(xi, x k m ))} K k=1 ∪ {(xj, x k n , g(xj, x k n ))} K k=1 (1) with g r = g min if r = syn; g r = g max if r = ant.", "Non-Linear Specialization Function Our retrofitting framework learns a global explicit specialization function which, when applied on a distributional vector space, transforms it into a space that better captures semantic similarity, i.e., discerns similarity from all other types of semantic relatedness.", "We seek the optimal parameters θ of the parametrized function f ( x; θ) : R d → R d (where d is the dimensionality of the input space).", "The specialized embedding x i of the word w i is then obtained as x i = f (x i ; θ).", "The specialized space X is obtained by transforming distributional vectors of all vocabulary words, X = f (X; θ).", "We define the specialization function f to be a multi-layer fully-connected feed-forward network with H hidden layers and non-linear activations φ.", "The illustration of this network is given in Figure 1b .", "The i-th hidden layer is defined with a weight matrix W i and a bias vector b i : h i (x; θi) = φ h i−1 (x; θi−1)W i + b i (2) where θ i is the subset of network's parameters up to the i-th layer.", "Note that in this notation, x = h 0 (x; ∅) and x = f (x, θ) = h H (x; θ) .", "Let d h be the size of the hidden layers.", "The network's parameters are then as follows: W 1 ∈ R d×d h ; W i ∈ R d h ×d h , i ∈ {2, .", ".", ".", ", H − 1}; W H ∈ R d h ×d ; b i ∈ R d h , i ∈ {1, .", ".", ".", ", H − 1}; b H ∈ R d .", "Optimization Objectives We feed the micro-batches consisting of 2K + 1 training instances to the specialization model (see Section 3.1).", "Each training instance consists of a pair of distributional (i.e., unspecialized) embedding vectors x i and x j and a score g denoting the desired distance between the specialized vectors x i and x j of corresponding words w i and w j .", "Mean Square Distance Objective (ER-MSD).", "Let our training batch consist of N training instances, {(x i 1 , x i 2 , g i )} N i=1 .", "The simplest objective function is then the difference between the desired and obtained distances of specialized vectors: JMSD = N i=1 g(f (x i 1 ), f (x i 2 )) − g i 2 (3) By minimizing the MSD objective we simply force the specialization model to produce a specialized embedding space X in which distances between all synonyms amount to g min , distances between all antonyms amount to g max and distances between all other word pairs remain the same as in the original space.", "The MSD objective does not leverage negative examples: it only indirectly enforces that synonym (or antonym) pairs (w i , w j ) have smaller (or larger) distances than corresponding non-constraint word pairs (w i , w k ) and (w j , w k ).", "Contrastive Objective (ER-CNT).", "An alternative to MSD is to directly contrast the distances of constraint pairs (i.e., antonyms and 
synonyms) with the distances of their corresponding negative examples, i.e., the pairs from their respective micro-batch (cf. Eq. (1) in Section 3.1).", "Such an objective should directly enforce that the similarity scores for synonyms (antonyms) (w_i, w_j) are larger (or smaller, for antonyms) than for pairs (w_i, w_k) and (w_j, w_k) involving the same words w_i and w_j, respectively.", "Let S and A be the sets of micro-batches created from synonymy and antonymy constraints.", "Let $M_s = \{(x_1^i, x_2^i, g^i)\}_{i=1}^{2K+1}$ be one micro-batch created from one synonymy constraint and let M_a be the analogous micro-batch created from one antonymy constraint.", "Let us then assume that the first triple (i.e., for i = 1) in every micro-batch corresponds to the constraint pair and the remaining 2K triples (i.e., for $i \in \{2, \dots, 2K+1\}$) to the respective non-constraint word pairs.", "We then define the contrastive objective as follows: $J_{CNT} = \sum_{M_s \in S} \sum_{i=2}^{2K+1} \big( (g^i - g_{min}) - (g'^i - g'^1) \big)^2 + \sum_{M_a \in A} \sum_{i=2}^{2K+1} \big( (g_{max} - g^i) - (g'^1 - g'^i) \big)^2$, where g' is a short-hand notation for the distance between vectors in the specialized space, i.e., $g'(x_1, x_2) = g(x'_1, x'_2) = g(f(x_1), f(x_2))$.", "Topological Regularization.", "Because the distributional space X already contains useful semantic information, we want our specialized space X' to move similar words closer together and dissimilar words further apart, but without disrupting the overall topology of X.", "To this end, we define an additional regularization objective that measures the distance between the original vectors x_1 and x_2 and their specialized counterparts $x'_1 = f(x_1)$ and $x'_2 = f(x_2)$, for all examples in the training set: $J_{REG} = \sum_{i=1}^{N} g(x_1^i, f(x_1^i)) + g(x_2^i, f(x_2^i))$ (4).", "We minimize the final objective function $J' = J + \lambda J_{REG}$, where J is either $J_{MSD}$ or $J_{CNT}$ and λ is the regularization factor which determines how strictly we retain the topology of the original space.", "Experimental Setup Distributional Vectors.", "In order to estimate the robustness of the proposed explicit retrofitting procedure, we experiment with three different publicly available and widely used collections of pre-trained distributional vectors for English: (1) SGNS-W2 - vectors trained on the Wikipedia dump from the Polyglot project (Al-Rfou et al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b), using context windows of size 2; (2) GLOVE-CC - vectors trained with the GloVe (Pennington et al., 2014) model on the Common Crawl; and (3) FASTTEXT - vectors trained on Wikipedia with a variant of SGNS that builds word vectors by summing the vectors of their constituent character n-grams (Bojanowski et al., 2017).", "Linguistic Constraints.", "We experiment with the sets of linguistic constraints used in prior work (Zhang et al., 2014; Ono et al., 2015).", "These constraints, extracted from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), comprise a total of 1,023,082 synonymy word pairs and 380,873 antonymy word pairs.", "Although this seems like a large number of linguistic constraints, there are only 57,320 unique words in all synonymy and antonymy constraints combined, and not all of these words are found in the vocabulary of the pre-trained distributional vector spaces.", "For example, only 15.3% of the words from the constraints are found in the whole vocabulary of the SGNS-W2 embeddings.", "Similarly, we find only 13.3% and 14.6% constraint words
among the 200K most frequent words from the GLOVE-CC and FASTTEXT vocabularies, respectively.", "This low coverage emphasizes the core limitation of current retrofitting methods - being able to specialize only the vectors of words seen in the external constraints - and the need for our global ER method, which can specialize all word vectors from the distributional space.", "ER Model Configuration.", "In all experiments, we set the distance function g to cosine distance, $g(x_1, x_2) = 1 - (x_1 \cdot x_2) / (\|x_1\| \|x_2\|)$, and use the hyperbolic tangent as the activation, φ = tanh.", "For each constraint (w_i, w_j), we create K = 4 corresponding negative examples for both w_i and w_j, resulting in micro-batches with 2K + 1 = 9 training instances.", "We separate 10% of the created micro-batches as the validation set.", "We then tune the hyper-parameter values - the number of hidden layers H = 5 and their size d_h = 1000, and the topological regularization factor λ = 0.3 - by minimizing the model's objective on the validation set.", "We train the model in mini-batches, each containing N_b = 100 constraints (i.e., 900 training instances, see above), using the Adam optimizer (Kingma and Ba, 2015) with the initial learning rate set to $10^{-4}$.", "We use the loss on the validation set as the early stopping criterion.", "Results and Discussion Word Similarity Evaluation Setup.", "We first evaluate the quality of the explicitly retrofitted embedding spaces intrinsically, on two word similarity benchmarks: the SimLex-999 dataset and SimVerb-3500 (Gerz et al., 2016), a recent dataset containing human similarity ratings for 3,500 verb pairs.", "We use Spearman's ρ rank correlation between gold and predicted word pair scores as the evaluation metric.", "We evaluate the specialized embedding spaces in two settings.", "In the first setting, termed lexically disjoint, we remove from our training set all linguistic constraints that contain any of the words found in SimLex or SimVerb.", "This way, we effectively evaluate the model's ability to generalize the specialization function to unseen words.", "In the second setting (lexical overlap) we retain the constraints containing SimLex or SimVerb words in the training set.", "For comparison, we also report the performance of the state-of-the-art local retrofitting model ATTRACT-REPEL, which is able to specialize only the words from the linguistic constraints.", "Results.", "The results with our ER model applied to three distributional spaces are shown in Table 1.", "The scores suggest that the proposed ER model is universally useful and robust.", "The ER-specialized spaces outperform the original distributional spaces across the board, for both objective functions.", "The results in the lexically disjoint setting are especially indicative of the improvements achieved by the ER.", "For example, we achieve a correlation gain of 18% for the GLOVE-CC vectors on SimLex using a specialization function learned without seeing a single constraint with any SimLex word.", "In the lexical overlap setting, we observe substantial gains only for GLOVE-CC.", "The modest gains in this setting with FASTTEXT and SGNS-W2 in fact strengthen the impression that the ER model learns a general specialization function, i.e., it does not \"overfit\" to words from the linguistic constraints.", "The ER model with the contrastive objective (ER-CNT) yields better performance on average than the one using the simpler square distance objective (ER-MSD).", "This is expected, given that the contrastive objective enforces
the model to distinguish pairs of semantically (dis)similar words from pairs of semantically related words.", "Finally, the post-processing ATTRACT-REPEL model based on local vector updates seems to substantially outperform the ER method in this task.", "The gap is especially visible for FASTTEXT and SGNS-W2 vectors.", "However, since ATTRACT-REPEL specializes only words seen in linguistic constraints, its performance crucially depends on the coverage of test set words in the constraints.", "ATTRACT-REPEL excels on the intrinsic evaluation as the constraints cover 99.2% of SimLex words and 99.9% of SimVerb words.", "However, its usefulness is less pronounced in real-life downstream scenarios in which such high coverage cannot be guaranteed, as demonstrated in Section 5.3.", "Analysis.", "We examine in more detail the performance of the ER model with respect to (1) the type of constraints used for training the model: synonyms and antonyms, only synonyms, or only antonyms, and (2) the extent to which we retain the topology of the original distributional space (i.e., with respect to the value of the topological regularization factor λ).", "All reported results were obtained by specializing the GLOVE-CC distributional space in the lexically disjoint setting (i.e., the employed constraints did not contain any of the SimLex or SimVerb words).", "In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints, on SimLex-999 (SL) and SimVerb-3500 (SV).", "We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and only antonym constraints, respectively.", "Clearly, we obtain the best specialization when combining synonyms and antonyms.", "[Table 1 residue: column headers for the lexically disjoint and lexical overlap settings, with SL/SV scores for the distributional and ER-specialized GLOVE-CC, FASTTEXT, and SGNS-W2 spaces.]", "Note, however, that using only synonyms or only antonyms also improves over the original distributional space.", "Next, in Figure 2 we depict the specialization performance (on SimLex and SimVerb) of the ER models with different values of the topology regularization factor λ (H fixed to 5).", "The best performance is obtained for λ = 0.3.", "Smaller lambda values overly distort the original distributional space, whereas larger lambda values dampen the specialization effects of the linguistic constraints.", "Language Transfer Readily available large collections of synonymy and antonymy word pairs do not exist for many languages.", "This is why we also investigate zero-shot specialization: we test if it is possible, with the help of cross-lingual word embeddings, to transfer the specialization knowledge learned from English constraints to languages without any training data.", "Evaluation Setup.", "We use the mapping model of Smith et al. (2017) to induce a multilingual vector space containing word vectors of three other languages - German, Italian, and Croatian - along with the English vectors.", "[Table 3: Spearman's ρ correlation scores for German, Italian, and Croatian embeddings in the transfer setup: the vectors are specialized using the models trained on English constraints and evaluated on the respective language-specific SimLex-999 variants.]", "Concretely, we map the Italian CBOW vectors (Dinu et al., 2015), German FastText vectors trained on German Wikipedia (Bojanowski et al., 2017), and Croatian
Skip-Gram vectors trained on the HrWaC corpus (Ljubešić and Erjavec, 2011) to the GLOVE-CC English space.", "We create the translation pairs needed to learn the projections by automatically translating the 4,000 most frequent English words to all three other languages with Google Translate.", "We then employ the ER model trained to specialize the GLOVE-CC space using the full set of English constraints, to specialize the distributional spaces of the other languages.", "We evaluate the quality of the specialized spaces on the respective SimLex-999 dataset for each language (Leviant and Reichart, 2015).", "Results.", "The results are provided in Table 3.", "They indicate that the ER models can substantially improve (e.g., by 13% for the German vector space) over distributional spaces also in the language transfer setup, without seeing a single constraint in the target language.", "These transfer results hold promise to support vector space specialization even for resource-lean languages.", "The more sophisticated contrastive ER-CNT model variant again outperforms the simpler ER-MSD variant, and it does so for all three languages, which is consistent with the findings from the monolingual English experiments (see Table 1).", "Downstream Tasks We now evaluate the impact of our global ER method on two downstream tasks in which differentiating semantic similarity from semantic relatedness is particularly important: lexical text simplification (LS) and dialog state tracking (DST).", "Lexical Text Simplification Lexical simplification aims to replace complex words - used less frequently and known to fewer speakers - with their simpler synonyms that fit into the context, that is, without changing the meaning of the original text.", "Because retaining the meaning of the original text is a strict requirement, complex words need to be replaced with semantically similar words, whereas replacements with semantically related words (e.g., replacing \"pilot\" with \"airplane\" in \"Ferrari's pilot won the race\") produce incorrect text which is more difficult to comprehend.", "Simplification Using Distributional Vectors.", "We use the LIGHT-LS lexical simplification algorithm of Glavaš and Štajner (2015), which makes the word replacement decisions primarily based on semantic similarities between words in a distributional vector space.", "For each word in the input text, LIGHT-LS retrieves the most similar replacement candidates from the vector space.", "The candidates are then ranked according to several measures of simplicity and fitness for the context.", "Finally, the replacement is made if the top-ranked candidate is estimated to be simpler than the original word.", "By plugging vector spaces specialized by the ER model into LIGHT-LS, we hope to generate true synonymous candidates more frequently than with the unspecialized distributional space.", "Evaluation Setup.", "We evaluate LIGHT-LS on the LS dataset crowdsourced by Horn et al. (2014).", "For each indicated complex word, Horn et al. (2014) collected 50 manual simplifications.", "We use two evaluation metrics from prior work (Horn et al., 2014; Glavaš and Štajner, 2015) to quantify the quality and frequency of word replacements: (1) accuracy (A) is the number of correct simplifications made (i.e., when the replacement made by the system is found in the list of manual replacements) divided by the total number of indicated complex words; and (2) change (C) is the percentage of indicated complex words that were replaced by the system (regardless of whether the
replacement was correct).", "We plug into LIGHT-LS both unspecialized and specialized variants of three previously used English embedding spaces: GLOVE-CC, FASTTEXT, and SGNS-W2.", "Additionally, we again evaluate specializations of the same spaces produced by the state-of-the-art local retrofitting model ATTRACT-REPEL.", "Results and Analysis.", "The results with LIGHT-LS are summarized in Table 4.", "The ER-CNT model yields considerable gains over unspecialized spaces for both metrics.", "This suggests that the ER-specialized embedding spaces allow LIGHT-LS to generate true synonymous candidate replacements more often than with unspecialized spaces, and also verifies the importance of specialization for the LS task.", "Our ER-CNT model now also yields better results than ATTRACT-REPEL in a real-world downstream task.", "Only 59.6% of all indicated complex words and manual replacement candidates from the LS dataset are now covered by the linguistic constraints.", "This accentuates the need to specialize the full distributional space in downstream applications, as done by the ER model, while ATTRACT-REPEL is limited to local vector updates of words seen in the constraints.", "By learning a global specialization function, the proposed ER models seem more resilient to the observed drop in coverage of test words by linguistic constraints.", "Table 5 shows example substitutions of LIGHT-LS when using different embedding spaces: the original GLOVE-CC space and its specializations obtained with ER-CNT and ATTRACT-REPEL.", "Dialog State Tracking Finally, we also evaluate the importance of explicit retrofitting in a downstream language understanding task, namely dialog state tracking (DST) (Henderson et al., 2014; Williams et al., 2016).", "[Table 6: DST performance of GLOVE-CC embeddings specialized using explicit retrofitting.]", "A DST model is typically the first component of a dialog system pipeline (Young, 2010), tasked with capturing the user's goals and updating the dialog state at each dialog turn.", "As in lexical simplification, discerning similarity from relatedness is crucial in DST (e.g., a dialog system should not recommend an \"expensive pub in the south\" when asked for a \"cheap bar in the east\").", "Evaluation Setup.", "To evaluate the impact of specialized word vectors on DST, we employ the Neural Belief Tracker (NBT), a DST model that makes inferences purely based on pre-trained word vectors (code: https://github.com/nmrksic/neural-belief-tracker).", "NBT composes word embeddings into intermediate utterance and context representations.", "For full model details, we refer the reader to the original paper.", "Following prior work, our DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset, which contains 1,200 dialogs (600 training, 200 validation, and 400 test dialogs).", "We evaluate the performance of the distributional and specialized GLOVE-CC embeddings and report it in terms of joint goal accuracy (JGA), a standard DST evaluation metric.", "All reported results are averages over 5 runs of the NBT model.", "Results.", "We show DST performance in Table 6.", "The DST results tell a similar story to the word similarity and lexical simplification results: the ER model substantially improves over the distributional space.", "With linguistic specialization constraints covering 57% of words from the WOZ dataset, the ER model's performance is on a par with the ATTRACT-REPEL specialization.", "This further confirms our hypothesis that the importance of learning a global
specialization for the full vocabulary in downstream tasks grows as the coverage of test words by the specialization constraints drops.", "Conclusion We presented a novel method for specializing word embeddings to better discern similarity from other types of semantic relatedness.", "Unlike existing retrofitting models, which directly update the vectors of words from external constraints, we use the constraints as training examples to learn an explicit specialization function, implemented as a deep feedforward neural network.", "Our global specialization approach resolves the well-known inability of retrofitting models to specialize vectors of words unseen in the constraints.", "We demonstrated the effectiveness of the proposed model on word similarity benchmarks, and in two downstream tasks: lexical simplification and dialog state tracking.", "We also showed that it is possible to transfer the specialization to languages without linguistic constraints.", "In future work, we will investigate explicit retrofitting methods for asymmetric relations like hypernymy and meronymy.", "We also intend to apply the method to other downstream tasks and to investigate the zero-shot language transfer of the specialization function for more language pairs.", "ER code is publicly available at: https://github.com/codogogo/explirefit." ] }
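As a rough illustration of the model described in the record above, here is a minimal numpy sketch of the specialization function f (a fully-connected feed-forward network with tanh activations, Eq. 2) and the ER-MSD objective (Eq. 3). The hidden dimensions follow the paper's configuration (H = 5, d_h = 1000); the input dimensionality d = 300, the random initialization, and the omission of training code, the contrastive objective, and the topological regularizer J_REG are simplifying assumptions, not details from the paper.

```python
import numpy as np

d, d_h, H = 300, 1000, 5  # d is an illustrative choice; H and d_h follow the paper
rng = np.random.default_rng(0)
# Parameter shapes mirror Section 3.2: W_1 (d x d_h), middle layers (d_h x d_h), W_H (d_h x d).
Ws = [rng.normal(0, 0.01, (d, d_h))] \
   + [rng.normal(0, 0.01, (d_h, d_h)) for _ in range(H - 2)] \
   + [rng.normal(0, 0.01, (d_h, d))]
bs = [np.zeros(d_h) for _ in range(H - 1)] + [np.zeros(d)]

def f(x):
    """Specialization function: x' = f(x; theta), tanh at every layer."""
    h = x
    for W, b in zip(Ws, bs):
        h = np.tanh(h @ W + b)
    return h

def g(x1, x2):
    """Cosine distance, the metric g used throughout the paper."""
    return 1.0 - (x1 @ x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))

def j_msd(batch):
    """ER-MSD (Eq. 3): squared gap between obtained and target distances."""
    return sum((g(f(x1), f(x2)) - g_i) ** 2 for x1, x2, g_i in batch)

# Toy usage: one synonym training instance with target distance g_min = 0.
x1, x2 = rng.normal(size=d), rng.normal(size=d)
print(j_msd([(x1, x2, 0.0)]))
```

The contrastive ER-CNT objective would be computed analogously, contrasting the first triple of each micro-batch against the remaining 2K negative triples.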
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5.1", "5.2", "5.3", "5.3.1", "5.3.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Explicit Retrofitting", "From Constraints to Training Instances", "Non-Linear Specialization Function", "Optimization Objectives", "Experimental Setup", "Word Similarity", "Language Transfer", "Downstream Tasks", "Lexical Text Simplification", "Dialog State Tracking", "Conclusion" ] }
GEM-SciDuet-train-115#paper-1308#slide-16
Conclusion
Retrofitting models specialize (i.e., fine-tune) distributional vectors for semantic similarity Shortcoming: specialize only vectors of words seen in external constraints Learning the specialization function using constrains as training examples Able to specialize distributional vectors of all words Good intrinsic (SL, SV) and downstream (DST, LS) performance Cross-lingual specialization transfer possible for languages
Retrofitting models specialize (i.e., fine-tune) distributional vectors for semantic similarity Shortcoming: specialize only vectors of words seen in external constraints Learning the specialization function using constrains as training examples Able to specialize distributional vectors of all words Good intrinsic (SL, SV) and downstream (DST, LS) performance Cross-lingual specialization transfer possible for languages
[]
GEM-SciDuet-train-116#paper-1313#slide-0
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F 1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, the training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g., whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al. (2017a).", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "[Footnote 1: http://allennlp.org/models]", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al. (2018).", "It does not take part-of-speech tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP, as shown in Figure 2a.", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define $spans(x) = \{(i, j) \mid 0 \le i < j \le n\}$.", "Define a parse for sentence x as a function π : spans(x) → L, where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: $Pr(\pi|x) = \prod_{s \in spans(x)} Pr(\pi(s) \mid x, s)$, hence $\log Pr(\pi|x) = \sum_{s \in spans(x)} \log Pr(\pi(s) \mid x, s)$.", "Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with $\sum_{s \in spans(x)} \sigma(\pi(s) \mid x, s)$ (Stern et al., 2017a).", "Note that this probability model accords mass to mis-structured trees (e.g., overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) to find the highest scoring parse that admits a well-formed tree: maximize $\sum_{(i,j) \in spans(x)} v^{+}_{(i,j)} \delta_{(i,j)} + v^{-}_{(i,j)} (1 - \delta_{(i,j)})$ subject to $i < k < j < m \implies \delta_{(i,j)} + \delta_{(k,m)} \le 1$ and $\delta_{(i,j)} \in \{0, 1\}$ for all $(i, j) \in spans(x)$, where $v^{+}_{(i,j)} = \max_{l \ne \emptyset} \sigma(l \mid x, (i, j))$ and $v^{-}_{(i,j)} = \sigma(\emptyset \mid x, (i, j))$.", "[Footnote 2: There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree. However, it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently well.]", "Classification Model For our span classification model σ(l | x, s), we use the model from Stern et al. (2017a), which leverages a method for encoding spans from Wang and Chang (2016) and Cross and Huang (2016).", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f_i and b_i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences $f_j - f_{i-1}$ and $b_i - b_{j+1}$.", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al. (2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4.", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100-dimension learned word embedding with a 1024-dimension contextualized word representation.", "[Table residue: Parser / Rec / Prec / F1 - RNNG (Dyer et al., 2016): 91.7; MSP (Stern et al., 2017a): (truncated).]", "[Footnote 4: The split we used is not standard for part-of-speech tagging. As a result, we do not compare to part-of-speech taggers.]", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When Kummerfeld et al. (2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a), it achieved F1 scores between 83% and 86%, even though its F1 score on WSJTEST was 92.1%.", "In Table 3, we discover that RSP does not suffer nearly as much degradation, with an average F1-score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger (Toutanova et al., 2003) to tag WSJTRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank (Judge et al., 2006), we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need to get good performance?", "[Table 3 caption: MSP refers to Stern et al. (2017a). Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a). MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003).]",
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by Nivre et al. (2007).", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences).", "In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no non-constituent declarations.", "Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJTRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to Garrette and Baldridge (2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "Li et al. (2016) provides a good overview.", "Here we highlight three important high-level strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which \"completes\" every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
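The markup-to-training-instance extraction described in the record above turns each declared constituent into a positive example and treats any span that crosses a declared bracket (such as "perpendicular to chord" crossing "to chord BD") as an implied negative. Below is a minimal sketch of that extraction, assuming token-offset spans; the function names and the exhaustive enumeration of crossing spans are illustrative assumptions, not the authors' released code.

```python
from typing import List, Tuple

Span = Tuple[int, int]  # (start, end) token offsets, end exclusive (assumed convention)

def crosses(a: Span, b: Span) -> bool:
    """True when two spans overlap without nesting, so they cannot both
    be constituents of a single well-formed tree."""
    (i, j), (k, m) = a, b
    return (i < k < j < m) or (k < i < m < j)

def extract_instances(num_tokens: int, declared: List[Span]):
    """Convert one partial bracketing into span-classification examples:
    every declared span is a positive, and every span crossing some
    declared span is an implied negative (non-constituent)."""
    positives = [(span, True) for span in declared]
    negatives = [((i, j), False)
                 for i in range(num_tokens)
                 for j in range(i + 1, num_tokens + 1)
                 if any(crosses((i, j), d) for d in declared)]
    return positives + negatives
```

For the fragment "perpendicular to chord BD" with "to chord BD" declared as span (1, 4), the span (0, 3) for "perpendicular to chord" satisfies 0 < 1 < 3 < 4 and is emitted as a negative, matching the implied non-constituency described above.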
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-0
Constituency Parsing is Useful
Textual Entailment (Bowman et al., 2016) Semantic Parsing (Hopkins et al., 2017) Sentiment Analysis (Socher et al., 2013) Language Modeling (Dyer et al., 2016)
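The fine-tuning recipe reported in the content above, in which each minibatch mixes 50 randomly selected WSJTRAIN sentences with all of the small domain set (GEOTRAIN or BIOCHEMTRAIN), admits a very simple batch construction. The following is one plausible reading of that description, not the authors' training loop.

```python
import random

def finetune_batches(wsj_train, domain_train, n_wsj=50):
    """Endless minibatches pairing a fresh random newswire sample with
    every partially annotated in-domain sentence."""
    while True:
        yield random.sample(wsj_train, n_wsj) + list(domain_train)
```

Keeping newswire sentences in every batch is consistent with the reported absence of degradation on WSJTEST after domain fine-tuning.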
GEM-SciDuet-train-116#paper-1313#slide-2
But Target Domains Are Diverse
What's the second-most-used vowel in English? Ethoxycoumarin was metabolized by isolated epidermal cells via dealkylation to 7-hydroxycoumarin (7-OHC) and subsequent conjugation.
GEM-SciDuet-train-116#paper-1313#slide-3
Performance Outside Source Domain
Parse geometry sentence with PTB trained parser a perimeter of 16 one side of
GEM-SciDuet-train-116#paper-1313#slide-5
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F 1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a).", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018).", "It does not take part-of-speech tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP, as shown in Figure 2a.", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: $\Pr(\pi \mid x) = \prod_{s \in \mathrm{spans}(x)} \Pr(\pi(s) \mid x, s)$, so $\log \Pr(\pi \mid x) = \sum_{s \in \mathrm{spans}(x)} \log \Pr(\pi(s) \mid x, s)$. Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with the sum of its span scores, $\sum_{s \in \mathrm{spans}(x)} \sigma(\pi(s) \mid x, s)$, following (Stern et al., 2017a).", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) to find the highest scoring parse that admits a well-formed tree: $\max_{\delta} \sum_{(i,j) \in \mathrm{spans}(x)} v^{+}_{(i,j)} \delta_{(i,j)} + v^{-}_{(i,j)} (1 - \delta_{(i,j)})$, subject to $\delta_{(i,j)} + \delta_{(k,m)} \le 1$ whenever $i < k < j < m$, and $\delta_{(i,j)} \in \{0, 1\}$ for all $(i, j) \in \mathrm{spans}(x)$, where $v^{+}_{(i,j)} = \max_{l \neq \emptyset} \sigma(l \mid x, (i, j))$ and $v^{-}_{(i,j)} = \sigma(\emptyset \mid x, (i, j))$.", "There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015), split the data into sentences, and then annotated each sentence as in Figure 1.", "Next, we randomly split these sentences into GEOTRAIN and GEODEV.", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) non-constituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJTRAIN sentences, plus all of GEOTRAIN.", "The results are in Table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning.", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3.", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of error-free sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007).", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences).", "In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no non-constituent declarations.", "Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJTRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important high-level strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which \"completes\" every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
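
The ILP decoding step reconstructed in the paper content above is compact enough to sketch in code. The following is a minimal, hypothetical illustration using the PuLP library: the toy sentence length, the made-up score dictionaries v_plus and v_minus, and all variable names are assumptions for the example, not the authors' released implementation.

import itertools
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

n = 4  # toy sentence length (hypothetical)
spans = [(i, j) for i in range(n) for j in range(i + 1, n + 1)]
# Hypothetical scores: v_plus = best non-empty label score for the span,
# v_minus = score of the empty (non-constituent) label.
v_plus = {s: -1.0 for s in spans}
v_minus = {s: 0.0 for s in spans}
v_plus[(0, 4)] = 5.0
v_plus[(0, 2)] = 2.0
v_plus[(1, 3)] = 1.9  # crosses (0, 2), so they cannot both be kept

prob = LpProblem("reconcile_spans", LpMaximize)
delta = {s: LpVariable(f"d_{s[0]}_{s[1]}", cat="Binary") for s in spans}
# Objective: collect v_plus when a span is kept, v_minus when it is not.
prob += lpSum(v_plus[s] * delta[s] + v_minus[s] * (1 - delta[s]) for s in spans)
# No two crossing spans (i < k < j < m) may both be constituents.
for (i, j), (k, m) in itertools.product(spans, spans):
    if i < k < j < m:
        prob += delta[(i, j)] + delta[(k, m)] <= 1
prob.solve()
print([s for s in spans if delta[s].value() == 1])  # [(0, 2), (0, 4)]

As the footnote in the paper content notes, the classifier rarely proposes crossing spans in practice, so nearly all of these constraints are inactive and the solve is cheap.
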
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-5
Relevant Recent Developments in NLP
Contextualized word representations improve sample efficiency. Span-focused models achieve state-of-the-art constituency parsing.
Contextualized word representations improve sample efficiency. Span-focused models achieve state-of-the-art constituency parsing.
[]
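
The partial-annotation loss described in the paper content above has a simple reading: a fully labeled span gets ordinary cross-entropy, while a span declared a constituent without a label is penalized only for probability mass on the empty (non-constituent) label. Below is a minimal sketch under that reading; the EMPTY index, the function name, and the toy logits are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

EMPTY = 0  # assumed index of the empty-sequence (non-constituent) label

def partial_span_loss(logits, label):
    """Loss for one span. label is an int index, or None when the
    annotation only says the span is a constituent (any non-empty
    label is acceptable)."""
    log_probs = F.log_softmax(logits, dim=-1)
    if label is not None:
        return -log_probs[label]        # ordinary cross-entropy
    p_empty = log_probs[EMPTY].exp()
    return -torch.log1p(-p_empty)       # -log P(label != EMPTY)

logits = torch.tensor([2.0, 0.5, -1.0])  # toy scores over 3 labels
print(partial_span_loss(logits, 1))      # labeled constituent span
print(partial_span_loss(logits, None))   # unlabeled constituent span
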
GEM-SciDuet-train-116#paper-1313#slide-6
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
GEM-SciDuet-train-116#paper-1313#slide-6
Contributions
Show that contextual word embeddings help domain adaptation. Adapt a parser using partial annotations, e.g., increase correct geometry-domain parses by 23%.
Show that contextual word embeddings help domain adaptation. Adapt a parser using partial annotations, e.g., increase correct geometry-domain parses by 23%.
[]
GEM-SciDuet-train-116#paper-1313#slide-7
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F 1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e., any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015), split the data into sentences, and then annotated each sentence as in Figure 1.", "Next, we randomly split these sentences into GEOTRAIN and GEODEV.", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) non-constituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJTRAIN sentences, plus all of GEOTRAIN.", "The results are in Table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning.", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3.", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof of concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of error-free sentences to 75.8%.",
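As a sketch of the loss behavior just described, suppose the classifier returns per-label log-probabilities for a declared span and that, as a purely illustrative convention, index 0 stands for the non-constituent label; both are assumptions, not the authors' code. A fully labeled span then incurs the usual negative log-likelihood, while an unlabeled constituent declaration is only penalized for mass on the non-constituent label, so any real label is acceptable.

    import torch

    def span_loss(log_probs, label_id=None):
        # log_probs: (num_labels,) log-distribution for one declared span.
        # label_id: gold label index; None means "constituent, any label is ok".
        if label_id is not None:
            # Full annotation (passing 0 covers an explicit non-constituent
            # declaration under the index-0 convention assumed here).
            return -log_probs[label_id]
        # Partial annotation: loss = -log(1 - P(non-constituent)), which is
        # zero whenever no probability mass falls on the non-constituent label.
        return -torch.log1p(-log_probs[0].exp())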
"Biomedicine and Chemistry We ran a similar experiment using biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007).", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences).", "In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no non-constituent declarations.", "Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJTRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important high-level strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which \"completes\" every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
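The fine-tuning recipe used in both domain experiments above, where each update mixes 50 randomly selected WSJTRAIN sentences with all of the few dozen partially annotated in-domain sentences, might look like the sketch below. The function names, the optimizer choice, and the learning rate and step count are assumptions for illustration; compute_loss stands for whatever sums the per-span losses for one sentence.

    import random
    import torch

    def fine_tune(model, wsj_train, domain_train, compute_loss,
                  steps=1000, lr=1e-4):
        # Mixing newswire into every minibatch adapts the parser to the new
        # domain while guarding against degradation on WSJ.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(steps):
            batch = random.sample(wsj_train, 50) + list(domain_train)
            loss = sum(compute_loss(model, sentence) for sentence in batch)
            opt.zero_grad()
            loss.backward()
            opt.step()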
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-7
Review Contextual Word Representations
Parsing as Span Classification The Span Classification Model Performance on PTB and new Domains Adapting Using Partial Annotations
Parsing as Span Classification The Span Classification Model Performance on PTB and new Domains Adapting Using Partial Annotations
[]
GEM-SciDuet-train-116#paper-1313#slide-8
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F 1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-8
Contextualized Word Representations
ELMo trained on Billion Word Corpus
ELMo trained on Billion Word Corpus
[]
GEM-SciDuet-train-116#paper-1313#slide-9
GEM-SciDuet-train-116#paper-1313#slide-9
Partial Annotations
Parsing as Span Classification The Span Classification Model
Parsing as Span Classification The Span Classification Model
[]
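The slide's "parsing as span classification" framing reduces to a few lines once per-span label log-probabilities are available: a parse is a mapping from spans to label sequences, and its log-probability is the sum of independent per-span terms. A sketch under assumed inputs follows; the span_log_probs lookup stands in for a trained base model.

```python
from itertools import combinations

def all_spans(n):
    """All spans (i, j) with 0 <= i < j <= n over a length-n sentence."""
    return list(combinations(range(n + 1), 2))

def parse_log_prob(parse, span_log_probs, n, empty=()):
    """log Pr(parse | x) as a sum of independent per-span terms.

    parse:          dict from constituent spans to label sequences,
                    e.g. {(2, 4): ("S", "VP")}; absent spans are treated
                    as non-constituents (the empty label sequence).
    span_log_probs: dict from (span, label_sequence) to log Pr(label | x,
                    span), assumed to come from a trained base model.
    """
    return sum(span_log_probs[(s, parse.get(s, empty))]
               for s in all_spans(n))
```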
GEM-SciDuet-train-116#paper-1313#slide-10
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a).", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018).", "It does not take part-of-speech tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence (S, VP), as shown in Figure 2a.", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: Pr(π | x) = ∏_{s ∈ spans(x)} Pr(π(s) | x, s), hence log Pr(π | x) = ∑_{s ∈ spans(x)} log Pr(π(s) | x, s).", "Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with score(π) = ∑_{s ∈ spans(x)} σ(π(s) | x, s) (Stern et al., 2017a).", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) to find the highest scoring parse that admits a well-formed tree: max_δ ∑_{(i,j) ∈ spans(x)} v⁺(i,j) δ(i,j) + v⁻(i,j) (1 − δ(i,j)), subject to: i < k < j < m ⇒ δ(i,j) + δ(k,m) ≤ 1, and δ(i,j) ∈ {0, 1} for all (i, j) ∈ spans(x), where v⁺(i,j) = max_{l ≠ ∅} σ(l | x, (i, j)) and v⁻(i,j) = σ(∅ | x, (i, j)).", "(Footnote: There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015), split the data into sentences, and then annotated each sentence as in Figure 1.", "Next, we randomly split these sentences into GEOTRAIN and GEODEV.", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) non-constituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJTRAIN sentences, plus all of GEOTRAIN.", "The results are in Table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning.", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3.", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of error-free sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007).", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences).", "In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no non-constituent declarations.", "Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJTRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important high-level strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which \"completes\" every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
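The decoding step in the paper content above reconciles independently classified spans by solving a small ILP. The sketch below expresses that program with the open-source PuLP solver; PuLP is one possible toolkit (the paper does not name an implementation), and v_pos/v_neg stand in for v⁺(i,j) and v⁻(i,j).

```python
import pulp

def reconcile(spans, v_pos, v_neg):
    """Pick the best non-crossing subset of spans as constituents.

    spans: list of (i, j) tuples; v_pos / v_neg: dicts mapping a span to
    its best non-null label score and its null-label score. Returns the
    set of spans chosen as constituents.
    """
    prob = pulp.LpProblem("reconcile_spans", pulp.LpMaximize)
    delta = {s: pulp.LpVariable(f"d_{s[0]}_{s[1]}", cat="Binary")
             for s in spans}

    # objective: sum of v_pos for kept spans and v_neg for dropped ones
    prob += pulp.lpSum(
        v_pos[s] * delta[s] + v_neg[s] * (1 - delta[s]) for s in spans)

    # crossing spans (i < k < j < m) cannot both be constituents
    for (i, j) in spans:
        for (k, m) in spans:
            if i < k < j < m:
                prob += delta[(i, j)] + delta[(k, m)] <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {s for s in spans if delta[s].value() > 0.5}
```

CBC is PuLP's bundled solver; any ILP backend would do, and since the classifier rarely produces conflicts, the constraint set typically stays small in practice.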
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-10
Selectively Annotate Important Phenomena
A triangle has [a perimeter {of 16] and one side of length 4}.
A triangle has [a perimeter {of 16] and one side of length 4}.
[]
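Both the geometry and biochemistry experiments described in this row fine-tune the WSJ-trained model on minibatches mixing 50 randomly selected WSJTRAIN sentences with all of the in-domain partial annotations. A schematic loop under those settings; model, optimizer, compute_loss, and the epoch count are assumed placeholders (compute_loss should return a scalar tensor so that backward() applies).

```python
import random

def fine_tune(model, optimizer, compute_loss, wsj_train, domain_train,
              epochs=10, wsj_per_batch=50):
    """Fine-tune on batches of 50 random WSJ sentences plus all domain
    annotations, mirroring the recipe in the experiments."""
    for _ in range(epochs):
        batch = random.sample(wsj_train, wsj_per_batch) + list(domain_train)
        random.shuffle(batch)
        optimizer.zero_grad()
        # sum per-example span losses over the mixed batch
        loss = sum(compute_loss(model, example) for example in batch)
        loss.backward()
        optimizer.step()
```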
GEM-SciDuet-train-116#paper-1313#slide-11
1313
GEM-SciDuet-train-116#paper-1313#slide-11
Full Versus Partial Annotation
(S (NP A triangle) (VP has (NP (NP (NP a perimeter) (PP of A triangle has [a perimeter {of 16] and one side of length 4}.
(S (NP A triangle) (VP has (NP (NP (NP a perimeter) (PP of A triangle has [a perimeter {of 16] and one side of length 4}.
[]
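The slide above contrasts a full treebank parse with lightweight bracket markup over the same sentence. One way such markup can be consumed is to map each bracket pair to a token span declaration; the sketch below assumes that "[" "]" and "{" "}" delimit two independent declarations that may cross, which is an interpretation of the slide's notation rather than a format specified by the authors.

```python
import re

OPENERS = {"[": "]", "{": "}"}          # two independent bracket types
CLOSERS = {v: k for k, v in OPENERS.items()}

def extract_span_declarations(marked_up):
    """Convert bracket markup into 0-based, end-exclusive token spans.

    Each bracket type gets its own stack, so declarations of different
    types may cross, as in the slide's example.
    """
    tokens, spans = [], []
    stacks = {b: [] for b in OPENERS}
    for piece in re.findall(r"[\[\]{}]|[^\s\[\]{}]+", marked_up):
        if piece in OPENERS:
            stacks[piece].append(len(tokens))
        elif piece in CLOSERS:
            spans.append((stacks[CLOSERS[piece]].pop(), len(tokens)))
        else:
            tokens.append(piece)
    return tokens, spans

tokens, spans = extract_span_declarations(
    "A triangle has [a perimeter {of 16] and one side of length 4}.")
# tokens[3:7]  -> ['a', 'perimeter', 'of', '16']
# tokens[5:13] -> ['of', '16', 'and', 'one', 'side', 'of', 'length', '4']
```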
GEM-SciDuet-train-116#paper-1313#slide-12
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a).", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018).", "It does not take part-of-speech tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence (S, VP), as shown in Figure 2a.", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: Pr(π | x) = ∏_{s ∈ spans(x)} Pr(π(s) | x, s), hence log Pr(π | x) = ∑_{s ∈ spans(x)} log Pr(π(s) | x, s).", "Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with score(π) = ∑_{s ∈ spans(x)} σ(π(s) | x, s) (Stern et al., 2017a).", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) to find the highest scoring parse that admits a well-formed tree: max_δ ∑_{(i,j) ∈ spans(x)} v⁺(i,j) δ(i,j) + v⁻(i,j) (1 − δ(i,j)), subject to: i < k < j < m ⇒ δ(i,j) + δ(k,m) ≤ 1, and δ(i,j) ∈ {0, 1} for all (i, j) ∈ spans(x), where v⁺(i,j) = max_{l ≠ ∅} σ(l | x, (i, j)) and v⁻(i,j) = σ(∅ | x, (i, j)).", "(Footnote: There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently
"Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a), which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016).", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f_i and b_i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f_j − f_{i−1} and b_i − b_{j+1}.", "A one-layer feedforward network maps each span representation to a distribution over labels (a model sketch appears after this content block).", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al. (2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4.", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100-dimension learned word embedding with a 1024-dimension ELMo word representation.", "(Table residue: Parser / Rec / Prec / F1, with RNNG (Dyer et al., 2016): -, -, 91.7; MSP (Stern et al., 2017a): …)", "(Footnote 4: The split we used is not standard for part-of-speech tagging. As a result, we do not compare to part-of-speech taggers.)", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When Kummerfeld et al. (2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a), it achieved F1 scores between 83% and 86%, even though its F1 score on WSJTEST was 92.1%.", "In Table 3, we discover that RSP does not suffer nearly as much degradation, with an average F1 score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger (Toutanova et al., 2003) to tag WSJTRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank (Judge et al., 2006), we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need to get good performance?", "(Table caption residue: (Stern et al., 2017a); Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a); MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003).)",
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e. any label is ok; one way to implement this masked loss is sketched below).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015), split the data into sentences, and then annotated each sentence as in Figure 1.", "Next, we randomly split these sentences into GEOTRAIN and GEODEV.", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) non-constituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJTRAIN sentences, plus all of GEOTRAIN.", "The results are in Table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning.", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3.", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of error-free sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007).", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences).", "In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no non-constituent declarations.", "Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJTRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to Garrette and Baldridge (2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "Li et al. (2016) provides a good overview.", "Here we highlight three important high-level strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which \"completes\" every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
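The span classification model described in the content above (a two-layer BiLSTM, boundary-difference span features, and a one-layer feedforward network) can be sketched as follows. This is a minimal PyTorch sketch, not the authors' released implementation: the label count, the batch-of-one shape, and the start/end padding convention are my assumptions, while the 1124-dimension embeddings, 250 hidden units, and 0.4 dropout follow the text.

```python
import torch
import torch.nn as nn

class SpanClassifier(nn.Module):
    """Base model sigma(l | x, s): BiLSTM over the sentence, span features
    from boundary differences, then a one-layer feedforward network."""
    def __init__(self, emb_dim=1124, hidden=250, n_labels=50):  # n_labels assumed
        super().__init__()
        self.hidden = hidden
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2, dropout=0.4,
                            bidirectional=True, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, n_labels))

    def forward(self, emb, spans):
        # emb: (1, n + 2, emb_dim) -- token embeddings with an extra start
        # and end symbol so boundary differences exist for edge spans.
        out, _ = self.lstm(emb)
        f, b = out[0, :, :self.hidden], out[0, :, self.hidden:]
        # Span (i, j) (1-based under the padding) is encoded as the
        # concatenation of f_j - f_{i-1} and b_i - b_{j+1}, as in the text.
        feats = torch.stack([torch.cat([f[j] - f[i - 1], b[i] - b[j + 1]])
                             for i, j in spans])
        return torch.log_softmax(self.ff(feats), dim=-1)  # log sigma(l | x, s)
```

Label index 0 can be reserved for the empty sequence ∅ (non-constituent); the decoding and loss sketches below assume that convention.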
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
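The ILP described in the paper content above can be handed to any off-the-shelf MILP library. Below is a minimal sketch using PuLP; the solver choice, the function name, and the score-dictionary format are mine, not the paper's. Here v⁺ and v⁻ are the precomputed best non-null and null label log probabilities for each span.

```python
import pulp

def decode_ilp(scores):
    """scores: {(i, j): (v_plus, v_minus)}. Returns the set of spans
    selected as constituents by the highest-scoring well-formed tree."""
    prob = pulp.LpProblem("rsp_decode", pulp.LpMaximize)
    delta = {s: pulp.LpVariable(f"d_{s[0]}_{s[1]}", cat="Binary")
             for s in scores}
    # Objective: v+ for chosen spans, v- for the rest.
    prob += pulp.lpSum(vp * delta[s] + vm * (1 - delta[s])
                       for s, (vp, vm) in scores.items())
    # Crossing-bracket constraint: overlapping spans cannot both be chosen.
    for (i, j) in scores:
        for (k, m) in scores:
            if i < k < j < m:
                prob += delta[(i, j)] + delta[(k, m)] <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {s for s in scores if delta[s].value() == 1}
```

As footnote 2 notes, the classifier rarely produces crossing spans in practice, so a simpler greedy reconciliation would typically yield the same trees.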
GEM-SciDuet-train-116#paper-1313#slide-12
Partial Annotation Definition
Partial annotation is a labeled span. A triangle has [NP a perimeter of 16] and one side of length 4 .
Partial annotation is a labeled span. A triangle has [NP a perimeter of 16] and one side of length 4 .
[]
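The slide above illustrates the bracket markup used for partial annotations, e.g. "[NP a perimeter of 16]". Here is a small sketch of one way to turn such markup into token spans with optional labels; the exact markup grammar is my inference from the examples, and an unlabeled bracket is treated as a bare constituency declaration.

```python
import re

def parse_markup(text):
    """Convert bracket markup into (tokens, annotations), where
    annotations maps a token span (i, j) to its label, or to None
    for a constituency declaration with no label."""
    tokens, annotations, stack = [], {}, []
    for tok in re.findall(r"\[[A-Z]*|\]|[^\s\[\]]+", text):
        if tok.startswith("["):
            stack.append((len(tokens), tok[1:] or None))  # None = no label
        elif tok == "]":
            start, label = stack.pop()
            annotations[(start, len(tokens))] = label
        else:
            tokens.append(tok)
    return tokens, annotations

# parse_markup("A triangle has [NP a perimeter of 16] and one side of length 4 .")
# -> tokens[3:7] == ['a', 'perimeter', 'of', '16'], annotations == {(3, 7): 'NP'}
```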
GEM-SciDuet-train-116#paper-1313#slide-13
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single-model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
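The fine-tuning recipe in the experiments above mixes 50 randomly selected WSJTRAIN sentences with the full (small) in-domain annotation set in every minibatch. A minimal sketch of that batching scheme, with hypothetical names and data format (each example is assumed to be a (tokens, span_annotations) pair):

```python
import random

def fine_tuning_batches(wsj_train, domain_train, steps, wsj_per_batch=50):
    """Yield minibatches that pair a random slice of newswire data with
    all of the partially annotated domain sentences, as described above."""
    for _ in range(steps):
        yield random.sample(wsj_train, wsj_per_batch) + list(domain_train)
```

Keeping newswire in every batch is what lets the adapted parser avoid degrading on WSJTEST while it absorbs the new domain.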
GEM-SciDuet-train-116#paper-1313#slide-13
Why Partial Annotations
Allowing annotators to selectively annotate important phenomena makes the process faster and simpler.
Allowing annotators to selectively annotate important phenomena makes the process faster and simpler.
[]
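Training directly on the partial annotations motivated by this slide needs a loss that ignores unannotated spans and, for a label-free constituent declaration, penalizes the model only when it classifies the span as a non-constituent. The paper does not spell out the exact form beyond "any label is ok"; penalizing the probability mass placed on the null label, i.e. −log(1 − p_∅), is one plausible reading, sketched here with assumed data conventions.

```python
import torch

NULL = 0  # index reserved for the empty-sequence (non-constituent) label

def partial_span_loss(log_probs, declarations):
    """log_probs: (num_spans, n_labels) log sigma from the span classifier.
    declarations: per-span entry that is a label index, the string
    "constituent" (declared a constituent, no label given), or None
    (unannotated span, which contributes no loss)."""
    terms = []
    for lp, decl in zip(log_probs, declarations):
        if decl is None:
            continue
        if decl == "constituent":
            # Any non-null label is acceptable: penalize only the
            # probability assigned to the null (non-constituent) label.
            p_not_null = 1.0 - lp[NULL].exp()
            terms.append(-torch.log(p_not_null.clamp_min(1e-9)))
        else:
            terms.append(-lp[decl])  # standard negative log-likelihood
    return torch.stack(terms).mean() if terms else torch.zeros(())
```

Because the per-span losses are independent, full trees and partial bracketings feed the same training loop unchanged, which is the point of the span-focused model.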
GEM-SciDuet-train-116#paper-1313#slide-15
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single-model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-15
Objective for Partial Annotation
Since we do not have a full parse, marginalize out components for which no supervision exists.
Since we do not have a full parse, marginalize out components for which no supervision exists.
[]
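The slide above ("marginalize out components for which no supervision exists") corresponds to the loss described in the paper content: a span declared to be a constituent without a label is penalized only for probability mass assigned to the empty (non-constituent) label. A minimal PyTorch sketch, assuming label index 0 is the empty label and a hypothetical tuple encoding for annotations:

```python
import torch
import torch.nn.functional as F

EMPTY = 0  # assumed index of the empty (non-constituent) label

def span_loss(logits, annotation):
    # logits: unnormalized scores over label sequences for one span
    # annotation: ("label", idx) | ("constituent",) | ("non_constituent",)
    log_probs = F.log_softmax(logits, dim=-1)
    if annotation[0] == "label":
        return -log_probs[annotation[1]]  # fully supervised span
    if annotation[0] == "non_constituent":
        return -log_probs[EMPTY]
    # Constituent with no label given: marginalize over all non-empty
    # labels, i.e. penalize only the mass on the empty label.
    return -torch.log1p(-log_probs[EMPTY].exp())
```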
GEM-SciDuet-train-116#paper-1313#slide-16
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single-model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-16
One Solution: Approximation
top k parses consistent with annotations
top k parses consistent with annotations
[]
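For the span features described in the classification-model passage above ($f_j - f_{i-1}$ and $b_i - b_{j+1}$ from a two-layer BiLSTM), here is a minimal PyTorch sketch. The hidden size of 250 matches the stated settings; the sentinel tokens at the sentence boundaries are an assumption so that the boundary states always exist.

```python
import torch
import torch.nn as nn

class SpanEncoder(nn.Module):
    """Two-layer BiLSTM; a span is encoded by boundary-state differences."""

    def __init__(self, in_dim, hidden=250):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)

    def forward(self, emb):
        # emb: (1, n, in_dim) token embeddings, assumed to include
        # <s>/</s> sentinel positions at indices 0 and n-1.
        out, _ = self.lstm(emb)
        f, b = out.squeeze(0).chunk(2, dim=-1)  # forward / backward halves
        return f, b

def span_feature(f, b, i, j):
    # Encoding of span (i, j): concat of f_j - f_{i-1} and b_i - b_{j+1}.
    return torch.cat([f[j] - f[i - 1], b[i] - b[j + 1]])
```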
GEM-SciDuet-train-116#paper-1313#slide-17
GEM-SciDuet-train-116#paper-1313#slide-17
Our Solution: Parsing as Span Classification
Assume the probability of a parse factors into a product of per-span probabilities. The objective then simplifies to $\log P(\pi \mid x) = \sum_{s \in \mathrm{spans}(x)} \log P(\pi(s) \mid x, s)$. Easy if the model classifies spans!
Assume the probability of a parse factors into a product of per-span probabilities. The objective then simplifies to $\log P(\pi \mid x) = \sum_{s \in \mathrm{spans}(x)} \log P(\pi(s) \mid x, s)$. Easy if the model classifies spans!
[]
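Since the slide above reduces the parse probability to a product over spans, scoring a candidate parse is just a sum of per-span label log-probabilities. A small illustrative sketch with hypothetical inputs:

```python
def parse_log_prob(span_log_probs, parse):
    # span_log_probs[(i, j)][label]: log P(label | x, (i, j)) from the classifier
    # parse[(i, j)]: label sequence for each span ("" marks a non-constituent)
    return sum(span_log_probs[s][parse[s]] for s in parse)

# e.g. a two-word sentence with a single NP over the whole span:
# parse = {(0, 1): "", (1, 2): "", (0, 2): "NP"}
```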
GEM-SciDuet-train-116#paper-1313#slide-18
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F 1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a).", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "[Footnote 1: a trained model is available at http://allennlp.org/models]", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018).", "It does not take part-of-speech tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence ⟨S, VP⟩, as shown in Figure 2a.", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L, where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: Pr(π|x) = ∏_{s ∈ spans(x)} Pr(π(s) | x, s), and hence log Pr(π|x) = ∑_{s ∈ spans(x)} log Pr(π(s) | x, s).", "Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with ∑_{s ∈ spans(x)} σ(π(s) | x, s) (Stern et al., 2017a).", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) to find the highest scoring parse that admits a well-formed tree: maximize over δ the objective ∑_{(i,j) ∈ spans(x)} v⁺_(i,j) δ_(i,j) + v⁻_(i,j) (1 − δ_(i,j)), subject to: i < k < j < m ⟹ δ_(i,j) + δ_(k,m) ≤ 1, and δ_(i,j) ∈ {0, 1} for every (i, j) ∈ spans(x), where v⁺_(i,j) = max_{l ≠ ∅} σ(l | x, (i, j)) and v⁻_(i,j) = σ(∅ | x, (i, j)).", "[Footnote 2:] There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently well.",
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
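The ILP reconciliation defined in the paper content above translates almost line for line into a solver call. The sketch below uses the PuLP library, which the paper does not name; the solver choice and the caller-supplied score(span) function, which must return the pair (v⁺, v⁻) from the span classifier, are assumptions of this example. Enumerating all crossing pairs is O(n⁴), which is tolerable here only because, as the paper notes, the classifier rarely produces conflicts in practice.

```python
from itertools import combinations
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def reconcile(n, score):
    """Pick the highest-scoring conflict-free set of constituent spans."""
    spans = [(i, j) for i in range(n) for j in range(i + 1, n + 1)]
    prob = LpProblem("span_reconciliation", LpMaximize)
    delta = {s: LpVariable(f"d_{s[0]}_{s[1]}", cat=LpBinary) for s in spans}
    # Objective: v+ for spans kept as constituents, v- for spans rejected.
    prob += lpSum(score(s)[0] * delta[s] + score(s)[1] * (1 - delta[s]) for s in spans)
    # Crossing spans (i < k < j < m) cannot both be constituents.
    for (i, j), (k, m) in combinations(spans, 2):
        if i < k < j < m:
            prob += delta[(i, j)] + delta[(k, m)] <= 1
    prob.solve()
    return [s for s in spans if delta[s].value() == 1]
```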
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-18
Parse Tree Labels All Spans
[Slide figure: the parse tree for the example sentence "She enjoys playing tennis", with every span labeled: constituents carry non-terminal sequences such as (S, VP) and (NP), leaves carry part-of-speech tags such as PRP and VBZ, and non-constituents carry the empty label (Cross and Huang, 2016; Stern et al., 2017).]
[Slide figure: the parse tree for the example sentence "She enjoys playing tennis", with every span labeled: constituents carry non-terminal sequences such as (S, VP) and (NP), leaves carry part-of-speech tags such as PRP and VBZ, and non-constituents carry the empty label (Cross and Huang, 2016; Stern et al., 2017).]
[]
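Slide 18's figure, summarized above, illustrates the labeling view defined in the paper content: a parse is a function from every span of the sentence to a possibly empty sequence of non-terminals. A small Python sketch for the slide's example sentence follows; the paper confirms only the (S, VP) label on span (2, 4), and the remaining labels are reconstructed from the standard parse of this sentence, so they should be read as illustrative.

```python
def spans(x):
    # All spans (i, j) with 0 <= i < j <= n, as defined in the paper content.
    n = len(x)
    return [(i, j) for i in range(n) for j in range(i + 1, n + 1)]

x = "She enjoys playing tennis".split()
pi = {s: () for s in spans(x)}   # empty sequence marks a non-constituent
pi[(0, 4)] = ("S",)              # She enjoys playing tennis
pi[(0, 1)] = ("NP",)             # She
pi[(1, 4)] = ("VP",)             # enjoys playing tennis
pi[(2, 4)] = ("S", "VP")         # playing tennis (unary chain, per Figure 2)
pi[(3, 4)] = ("NP",)             # tennis
```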
GEM-SciDuet-train-116#paper-1313#slide-19
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
GEM-SciDuet-train-116#paper-1313#slide-19
Training on Full and Partial Annotations
A partial annotation is a labeled span. A full parse labels every span in the sentence. Therefore, training on both is identical under our derived objective.
A partial annotation is a labeled span. A full parse labels every span in the sentence. Therefore, training on both is identical under our derived objective.
[]
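The claim in the slide above, that full and partial annotations train identically under the derived objective, comes down to how a single span's loss is computed. The sketch below is one natural probabilistic reading of the rule stated in the paper content, where a constituent declared without a label is penalized only for probability mass placed on the empty label; the CONSTITUENT_ANY marker is an illustrative device, not from the paper.

```python
import torch

EMPTY = 0             # index of the empty (non-constituent) label
CONSTITUENT_ANY = -1  # declared a constituent, but no particular label given

def span_loss(log_probs, target):
    # log_probs: (num_labels,) log-distribution over labels for a single span
    if target == CONSTITUENT_ANY:
        # Loss only when the span is called a non-constituent:
        # -log P(any non-empty label) = -log(1 - P(empty)).
        return -torch.log1p(-log_probs[EMPTY].exp())
    return -log_probs[target]  # ordinary negative log-likelihood on the given label
```

A fully annotated tree then contributes one such term per span, labeled or empty, which is exactly the log-likelihood factorization from the paper.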
GEM-SciDuet-train-116#paper-1313#slide-20
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F 1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
"From these marked-up sentences, we can extract training instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e. any label is ok).", "Experiments Geometry Questions We took the publicly available training data from Seo et al. (2015), split the data into sentences, and then annotated each sentence as in Figure 1.", "Next, we randomly split these sentences into GEOTRAIN and GEODEV.", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) non-constituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJTRAIN sentences, plus all of GEOTRAIN.", "The results are in Table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning.", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3.", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of error-free sentences to 75.8%.",
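The "any label is ok" rule for unlabeled constituent declarations, described earlier in this section, amounts to a masked loss: penalize only the probability mass the classifier assigns to the empty (non-constituent) label. A hypothetical PyTorch sketch follows; the released trainer may differ, and fully labeled spans would instead use the ordinary negative log-likelihood.

```python
# Hypothetical sketch of the partial-annotation loss; the released trainer
# may differ. Fully labeled spans would use the ordinary NLL instead.
import torch

def unlabeled_constituent_loss(log_probs, empty_label=0):
    # log_probs: (num_spans, num_labels) log-distributions for spans that
    # were declared constituents without a specific label. Penalize only
    # the mass assigned to the empty (non-constituent) label: any real
    # label is acceptable.
    p_empty = log_probs[:, empty_label].exp()
    return -torch.log1p(-p_empty.clamp(max=1 - 1e-6)).sum()  # -log P(any label)
```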
"Biomedicine and Chemistry We ran a similar experiment using biomedical and chemistry text, taken from the unannotated data provided by Nivre et al. (2007).", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences).", "In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no non-constituent declarations.", "Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJTRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to Garrette and Baldridge (2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "Li et al. (2016) provide a good overview.", "Here we highlight three important high-level strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which \"completes\" every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-20
Parsing Using Span Classification Model
Find maximum using dynamic programming:
Find maximum using dynamic programming:
[]
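The slide's "Find maximum using dynamic programming" refers to searching the span scores for the best well-formed tree, the alternative to the ILP mentioned in the paper's footnote. Below is a minimal best-split sketch over binarized trees, invented for illustration, assuming a `span_score(i, j)` function that returns the best label score (including the empty label, which absorbs binarization) for each span:

```python
# Minimal best-split sketch, assuming span_score(i, j) returns the best
# label score (including the empty label) for each span; invented for
# illustration, not the paper's decoder.
from functools import lru_cache

def best_tree(n, span_score):
    @lru_cache(maxsize=None)
    def best(i, j):
        if j - i == 1:                       # single token: nothing to split
            return span_score(i, j), ((i, j),)
        candidates = []
        for k in range(i + 1, j):            # choose the best split point k
            ls, lt = best(i, k)
            rs, rt = best(k, j)
            candidates.append((ls + rs, lt + rt))
        s, subtree = max(candidates, key=lambda c: c[0])
        return s + span_score(i, j), ((i, j),) + subtree
    return best(0, n)                        # (best score, chosen spans)
```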
GEM-SciDuet-train-116#paper-1313#slide-21
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single-model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-21
Summary
Partial annotations are labeled spans. Use a span classification model to parse. Training on partial and full annotations becomes identical.
Partial annotations are labeled spans. Use a span classification model to parse. Training on partial and full annotations becomes identical.
[]
GEM-SciDuet-train-116#paper-1313#slide-22
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single-model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
"Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion. Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
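To make the "no change to the training process" point concrete, here is an illustrative sketch (names invented for this example) of how a partial bracketing could be turned into the span-classification instances consumed by `partial_annotation_loss` above. The crossing rule is one natural way to derive the implied non-constituents mentioned earlier: a span that crosses a declared constituent cannot itself be a constituent of any well-formed tree.

```python
def declarations_from_markup(n, brackets):
    """n: sentence length in tokens.
    brackets: declared constituents as (i, j, label_or_None) fencepost spans,
    e.g. the markup "Find [ the angle designated by x ]" yields one bracket.
    Returns (span, declaration) pairs for span-classification training.
    """
    instances = [((i, j), ("constituent", label)) for i, j, label in brackets]
    # Implied non-constituents: any span crossing a declared constituent.
    for a in range(n):
        for b in range(a + 1, n + 1):
            if any(a < i < b < j or i < a < j < b for i, j, _ in brackets):
                instances.append(((a, b), ("non_constituent", None)))
    return instances
```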
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-22
Model Architecture (Stern et al., 2017)
She enjoys playing tennis . :
She enjoys playing tennis . :
[]
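The "Model Architecture" slide above points at the span-focused model, whose core the paper describes as a two-layer BiLSTM over contextualized word vectors, spans encoded by differences of boundary states, and a one-layer feedforward network over labels. The sketch below is one plausible PyTorch rendering, not the released implementation: the stated sizes (250 hidden units, 0.4 dropout, 1124-dimensional inputs) come from the paper, while `num_labels`, the fencepost indexing convention, and the zero-padded out-of-sentence boundary states are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class SpanClassifier(nn.Module):
    def __init__(self, word_dim=1124, hidden=250, num_labels=100):
        super().__init__()
        self.lstm = nn.LSTM(word_dim, hidden, num_layers=2,
                            bidirectional=True, dropout=0.4, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, num_labels))

    def forward(self, words, i, j):
        """words: (n, word_dim) contextualized vectors; fencepost span 0 <= i < j <= n."""
        states, _ = self.lstm(words.unsqueeze(0))        # (1, n, 2*hidden)
        h = states.size(-1) // 2
        zero = torch.zeros(1, h)
        fwd = torch.cat([zero, states[0, :, :h]])        # fwd[k] = forward state after k tokens
        bwd = torch.cat([states[0, :, h:], zero])        # bwd[k] = backward state entering token k+1
        span = torch.cat([fwd[j] - fwd[i], bwd[i] - bwd[j]])  # boundary-state differences
        return torch.log_softmax(self.ff(span), dim=-1)  # log P(label | x, span)
```

Encoding a span by boundary differences keeps every span decision independent, which is exactly what lets individual annotated spans serve as training instances.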
GEM-SciDuet-train-116#paper-1313#slide-23
Differences
Objective: Maximum likelihood on labels (RSP) vs. Maximum margin on trees (MSP); POS Tags as Input: No (RSP) vs. Yes (MSP)
Objective: Maximum likelihood on labels (RSP) vs. Maximum margin on trees (MSP); POS Tags as Input: No (RSP) vs. Yes (MSP)
[]
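As a gloss on this "Differences" slide, the two training objectives contrast as sketched below. This is schematic pseudocode in Python, not either system's actual code: RSP sums independent negative log-likelihoods over span labels, while MSP applies a hinge between the gold tree's score and the next highest scoring decoded tree, coupling training to decoding.

```python
def likelihood_objective(span_log_probs, gold_labels):
    """RSP-style: independent maximum likelihood over every span's label."""
    return -sum(lp[g] for lp, g in zip(span_log_probs, gold_labels))

def margin_objective(gold_tree_score, best_decoded_score, delta=1.0):
    """MSP-style: penalize violation of a margin between the gold parse
    and the next highest scoring parse found by the decoder."""
    return max(0.0, delta + best_decoded_score - gold_tree_score)
```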
GEM-SciDuet-train-116#paper-1313#slide-24
Experiments and Results
Learning Curve on New Domains; Adapting Using Partial Annotations
Learning Curve on New Domains; Adapting Using Partial Annotations
[]
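The "Learning Curve on New Domains" slide summarizes an experiment protocol that fits in a few lines, sketched below. `train_and_eval` is a caller-supplied (hypothetical) function that fine-tunes a fresh copy of the WSJ-trained parser on the given in-domain subset and returns its dev-set score; the subset sizes are illustrative.

```python
def learning_curve(train_and_eval, domain_train, sizes=(0, 50, 100, 500)):
    """Retrain with increasing amounts of in-domain data, tracking dev score.
    The paper reports, e.g., QuestionBank dev F1 rising from 89.9 with no
    in-domain data to 94.9 with just 50 annotated questions."""
    return {k: train_and_eval(domain_train[:k]) for k in sizes}
```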
GEM-SciDuet-train-116#paper-1313#slide-25
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e., any label is acceptable).", "Experiments Geometry Questions We took the publicly available training data from Seo et al. (2015), split the data into sentences, and then annotated each sentence as in Figure 1.", "Next, we randomly split these sentences into GEOTRAIN and GEODEV.", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) non-constituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJTRAIN sentences, plus all of GEOTRAIN.", "The results are in Table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning.", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3.", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof of concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of error-free sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by Nivre et al. (2007).", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences).", "In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no non-constituent declarations.", "Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJTRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to Garrette and Baldridge (2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "Li et al. (2016) provide a good overview.", "Here we highlight three important high-level strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which \"completes\" every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
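The ILP of Section 2.1 above is small enough to hand to an off-the-shelf solver. The paper does not say which solver the authors used; the sketch below uses the PuLP library with its bundled CBC solver purely for illustration, with v_plus and v_minus computed from the span classifier as that section defines them.

import pulp

def decode(spans, v_plus, v_minus):
    # Choose delta[(i, j)] = 1 for constituents so that no two chosen
    # spans cross, maximizing the objective from Section 2.1.
    prob = pulp.LpProblem("reconcile_spans", pulp.LpMaximize)
    delta = {s: pulp.LpVariable("d_%d_%d" % s, cat="Binary") for s in spans}
    prob += pulp.lpSum(v_plus[s] * delta[s] + v_minus[s] * (1 - delta[s])
                       for s in spans)
    for (i, j) in spans:
        for (k, m) in spans:
            if i < k < j < m:  # crossing spans cannot both be constituents
                prob += delta[(i, j)] + delta[(k, m)] <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {s: int(delta[s].value()) for s in spans}

Since the paper notes that span conflicts are rare in practice, the solver usually has little work to do and simpler reconciliation schemes perform equivalently.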
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-25
Performance on PTB
+Maximum Likelihood on Labels Ours
[]
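Both fine-tuning experiments (Sections 5.1 and 5.2) use the same regime: every minibatch mixes 50 randomly drawn WSJTRAIN sentences with all of the partially annotated in-domain sentences, which is what keeps newswire performance from degrading while the parser adapts. A minimal sketch of that sampler, with hypothetical names:

import random

def finetune_batches(wsj_train, domain_train, num_batches, k=50):
    # Each batch pairs k random newswire sentences with every in-domain
    # example, per the fine-tuning setup described in Section 5.
    for _ in range(num_batches):
        yield random.sample(wsj_train, k) + list(domain_train)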
GEM-SciDuet-train-116#paper-1313#slide-26
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F 1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-26
Question Bank Judge et al 2006
In contrast, PTB has few questions. Who is the author of the book, ``The Iron Lady: A Biography of Margaret Thatcher''?
[]
GEM-SciDuet-train-116#paper-1313#slide-28
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F 1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by Nivre et al. (2007).", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences).", "In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no non-constituent declarations.", "Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJTRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work", "The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation", "Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to Garrette and Baldridge (2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation", "Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "Li et al. (2016) provide a good overview.", "Here we highlight three important high-level strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which \"completes\" every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
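A minimal sketch of the partial-annotation loss described in the Experiments section above: loss is recorded for an unlabeled constituent declaration only when the span is scored as a non-constituent. This is illustrative rather than the paper's actual code; it assumes a PyTorch-style span classifier whose label vocabulary reserves index 0 for the empty (non-constituent) label, and it reads "any label is ok" as maximizing the total probability mass on non-empty labels.

```python
import torch
import torch.nn.functional as F

EMPTY = 0  # assumed index of the empty (non-constituent) label

def partial_span_loss(logits, declarations):
    """logits: (num_spans, num_labels) scores from the span classifier.
    declarations: list aligned with the rows of `logits`; each entry is
      ("label", k)           span is a constituent with gold label index k
      ("constituent",)       span is a constituent; any non-empty label is ok
      ("non-constituent",)   span is declared not to be a constituent
      None                   span is unannotated and contributes no loss
    """
    log_probs = F.log_softmax(logits, dim=-1)
    losses = []
    for i, decl in enumerate(declarations):
        if decl is None:
            continue
        if decl[0] == "label":
            losses.append(-log_probs[i, decl[1]])
        elif decl[0] == "non-constituent":
            losses.append(-log_probs[i, EMPTY])
        else:  # unlabeled constituent: penalize only mass on the empty label
            p_empty = log_probs[i, EMPTY].exp()
            losses.append(-torch.log1p(-p_empty + 1e-12))
    return torch.stack(losses).mean() if losses else logits.new_zeros(())
```

Because a fully annotated tree reduces to a list of ("label", k) declarations over all spans, the same loss covers full and partial annotations without any change to the training algorithm.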
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-28
How Much Data Do We Need?
From 0 to 100 parses
From 0 to 100 parses
[]
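To make the "regex-and-replace" baseline criticized in the Rapid Parser Extension section concrete, here is a rough sketch of its masking step. The regular expression is a simplified stand-in invented for illustration and is not the one used by Seo et al. (2015).

```python
import re

# Toy pattern for math expressions such as "CE = 2" or bare labels like "BD".
MATH = re.compile(r"[A-Z]{2,}\s*(?:=|<|>)\s*\d+|[A-Z]{2,}")

def mask_math(sentence):
    """Replace each matched expression with an indexed dummy token."""
    expressions = []
    def repl(match):
        expressions.append(match.group(0))
        return f"MATHTOKEN{len(expressions) - 1}"
    return MATH.sub(repl, sentence), expressions

masked, exprs = mask_math("Diameter AC is perpendicular to chord BD at E.")
# masked -> 'Diameter MATHTOKEN0 is perpendicular to chord MATHTOKEN1 at E.'
# exprs  -> ['AC', 'BD']
```

A production pattern would also have to cover forms like "2x = 3y", and every match receives the same opaque treatment regardless of whether it functions as a verb phrase or a noun phrase, which is exactly the first downside listed above.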
GEM-SciDuet-train-116#paper-1313#slide-29
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First, we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single-model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-29
Geometry Problems (Seo et al., 2015)
In the diagram at the right, circle O has a radius of 5, and CE = 2. Diameter AC is perpendicular to chord BD at E. What is the length of BD? Ethoxycoumarin was metabolized by isolated epidermal cells via dealkylation to 7-hydroxycoumarin (7-OHC) and subsequent conjugation.
In the diagram at the right, circle O has a radius of 5, and CE = 2. Diameter AC is perpendicular to chord BD at E. What is the length of BD? Ethoxycoumarin was metabolized by isolated epidermal cells via dealkylation to 7-hydroxycoumarin (7-OHC) and subsequent conjugation.
[]
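The markup format referenced above (e.g. "Find [ the measure of [ the angle designated by x ] ]") maps directly to span declarations. The sketch below is one plausible reading rather than the authors' tooling: brackets are space-separated tokens, declared constituents become half-open token spans, and "implied non-constituency" is taken to mean any span that crosses a declared constituent.

```python
def extract_spans(annotated):
    """Turn a bracket-annotated sentence into span declarations."""
    tokens, stack, constituents = [], [], []
    for tok in annotated.split():
        if tok == "[":
            stack.append(len(tokens))
        elif tok == "]":
            constituents.append((stack.pop(), len(tokens)))
        else:
            tokens.append(tok)

    def crosses(a, b):  # overlapping but not nested spans
        (i, j), (k, m) = sorted([a, b])
        return i < k < j < m

    n = len(tokens)
    all_spans = [(i, j) for i in range(n) for j in range(i + 1, n + 1)]
    implied_neg = [s for s in all_spans
                   if any(crosses(s, c) for c in constituents)]
    return tokens, constituents, implied_neg

tokens, pos, neg = extract_spans(
    "Find [ the measure of [ the angle designated by x ] ]")
# pos -> [(4, 9), (1, 9)], i.e. "the angle designated by x" and
#        "the measure of the angle designated by x"
```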
GEM-SciDuet-train-116#paper-1313#slide-30
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First, we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single-model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-30
Setup
Annotator is a parsing expert. Annotated sentences randomly split into train and dev.
Annotator is a parsing expert. Annotated sentences randomly split into train and dev.
[]
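The data preparation behind this Setup slide (a random train/dev split, followed by removal of duplicate sentences spanning both sets) is simple enough to sketch. Which side loses the duplicates is not stated, so dropping them from the dev half here is an assumption.

```python
import random

def split_train_dev(sentences, seed=0):
    """Shuffle, split in half, then drop cross-set duplicate sentences."""
    rng = random.Random(seed)
    shuffled = list(sentences)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    train, dev = shuffled[:half], shuffled[half:]
    train_set = set(train)  # assumes sentences are hashable strings
    dev = [s for s in dev if s not in train_set]
    return train, dev
```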
GEM-SciDuet-train-116#paper-1313#slide-31
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First, we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single-model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
"Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a).", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018).", "It does not take part-of-speech tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence (S, VP), as shown in Figure 2a.", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: $\Pr(\pi \mid x) = \prod_{s \in \mathrm{spans}(x)} \Pr(\pi(s) \mid x, s)$, so that $\log \Pr(\pi \mid x) = \sum_{s \in \mathrm{spans}(x)} \log \Pr(\pi(s) \mid x, s)$.", "Hence, we will train a base model $\sigma(l \mid x, s)$ to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with $\sum_{s \in \mathrm{spans}(x)} \sigma(\pi(s) \mid x, s)$ (Stern et al., 2017a).", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP; see footnote 2) to find the highest scoring parse that admits a well-formed tree: $$\max_{\delta} \sum_{(i,j) \in \mathrm{spans}(x)} v^{+}_{(i,j)}\,\delta_{(i,j)} + v^{-}_{(i,j)}\,\bigl(1 - \delta_{(i,j)}\bigr)$$ subject to $i < k < j < m \implies \delta_{(i,j)} + \delta_{(k,m)} \le 1$ and $\delta_{(i,j)} \in \{0, 1\}$ for all $(i,j) \in \mathrm{spans}(x)$, where $v^{+}_{(i,j)} = \max_{l \neq \emptyset} \sigma(l \mid x, (i,j))$ and $v^{-}_{(i,j)} = \sigma(\emptyset \mid x, (i,j))$.", "Footnote 2: There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently well.", "Classification Model For our span classification model $\sigma(l \mid x, s)$, we use the model from (Stern et al., 2017a), which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016).", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by $f_i$ and $b_i$ respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences $f_j - f_{i-1}$ and $b_i - b_{j+1}$.", "A one-layer feedforward network maps each span representation to a distribution over labels (both the encoder and the ILP are sketched in code below).", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4.", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding with a 1024 dimension contextualized (ELMo) word representation.", "[Table residue, partially recovered: columns Parser / Rec / Prec / F1; RNNG (Dyer et al., 2016): Rec and Prec not reported, F1 91.7; MSP (Stern et al., 2017a): remaining values lost in extraction.]", "Footnote 4: The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.",
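To make the two pieces above concrete, here are minimal, hedged sketches. First, the span classifier: a two-layer BiLSTM with span-difference features and a one-layer feedforward output. The class name, the padding scheme, and the label count are illustrative assumptions; only the hidden size (250), dropout (0.4), and embedding length (1124) come from the text.

```python
# Minimal PyTorch sketch of the span classifier described above: a two-layer
# BiLSTM, span (i, j) encoded as [f_j - f_{i-1} ; b_i - b_{j+1}], and a
# one-layer feedforward output over labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanClassifier(nn.Module):
    def __init__(self, emb_dim=1124, hidden=250, num_labels=128):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True, dropout=0.4)
        self.ffnn = nn.Linear(2 * hidden, num_labels)  # one-layer feedforward

    def forward(self, embeddings, i, j):
        # embeddings: (1, n, emb_dim), with tokens 1-indexed below. Pad one
        # step on each side so f_{i-1} and b_{j+1} exist at the boundaries.
        states, _ = self.lstm(F.pad(embeddings, (0, 0, 1, 1)))
        fwd, bwd = states.chunk(2, dim=-1)  # forward / backward halves
        span = torch.cat([fwd[0, j] - fwd[0, i - 1],
                          bwd[0, i] - bwd[0, j + 1]], dim=-1)
        return torch.log_softmax(self.ffnn(span), dim=-1)  # log sigma(l | x, span)
```

In the full model, `embeddings` would be the concatenation of the 100-dimension learned embedding and the 1024-dimension contextualized representation for each token. Second, the reconciliation ILP; the paper does not name a solver, so the use of the PuLP library and the naive O(n^4) constraint enumeration are assumptions made for clarity:

```python
# Hedged sketch of the reconciliation ILP above. v_plus / v_minus map each
# span (i, j) to the scores v+ (best non-empty label) and v- (empty label).
from itertools import combinations
import pulp

def reconcile(n, v_plus, v_minus):
    spans = [(i, j) for i in range(n) for j in range(i + 1, n + 1)]
    prob = pulp.LpProblem("span_reconciliation", pulp.LpMaximize)
    delta = {s: pulp.LpVariable(f"d_{s[0]}_{s[1]}", cat="Binary") for s in spans}
    # Objective: v+ for every selected span, v- for every rejected span.
    prob += pulp.lpSum(v_plus[s] * delta[s] + v_minus[s] * (1 - delta[s])
                       for s in spans)
    # Crossing brackets i < k < j < m cannot both be constituents.
    for (i, j), (k, m) in combinations(spans, 2):
        if i < k < j < m:
            prob += delta[(i, j)] + delta[(k, m)] <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [s for s in spans if delta[s].value() > 0.5]
```

As footnote 2 notes, span conflicts are rare in practice, so a greedy or chart-based reconciliation would likely behave the same.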
"Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a), it achieved F1 scores between 83% and 86%, even though its F1 score on WSJTEST was 92.1%.", "In Table 3, we discover that RSP does not suffer nearly as much degradation, with an average F1 score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger (Toutanova et al., 2003) to tag WSJTRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank (Judge et al., 2006), we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "[Table 3 caption notes, partially recovered: (Stern et al., 2017a); Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a); MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003).]", "Surprisingly, with only 50 annotated questions (see Table 4), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST, getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005), we see a similar, if somewhat less dramatic, trend.", "See Table 5.", "With 50 annotated sentences, performance on GENIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010): the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "[Figure 3: The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\" before and after retraining RSP on 63 partially annotated geometry statements.]", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1.", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015), split the data into sentences, and then annotated each sentence as in Figure 1.", "Next, we randomly split these sentences into GEOTRAIN and GEODEV.", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) non-constituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJTRAIN sentences, plus all of GEOTRAIN.", "The results are in Table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning.", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3.", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of error-free sentences to 75.8%.",
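A hedged sketch of the training signal described in this section: extracting span instances from a partial bracketing, and the loss rule for constituent declarations that carry no label. The paper describes the behavior, not an implementation, so every function name, the instance representation, and the exact form of the "any non-empty label is fine" loss are assumptions.

```python
# Hedged sketch: span instances from a partial bracketing, plus the loss rule
# for unlabeled constituent declarations. EMPTY denotes the empty
# (non-constituent) label; all names here are illustrative assumptions.
import torch

EMPTY = 0  # assumed index of the empty label in the classifier output

def instances_from_markup(n, constituents, non_constituents=()):
    """constituents: iterable of (i, j, label_index_or_None) declarations."""
    out = [((i, j), label) for (i, j, label) in constituents]
    for (i, j, _) in constituents:
        # Any span crossing a declared constituent cannot be a constituent of
        # the same well-formed tree, so add it with the empty label.
        for k in range(n):
            for m in range(k + 1, n + 1):
                if i < k < j < m or k < i < m < j:
                    out.append(((k, m), EMPTY))
    out.extend(((k, m), EMPTY) for (k, m) in non_constituents)
    return out

def span_loss(log_probs, label):
    """log_probs: log-distribution over labels for one span."""
    if label is not None:
        return -log_probs[label]  # fully specified (possibly EMPTY) label
    # Constituent declared without a label: any non-empty label is fine, so
    # penalize only the mass on EMPTY (a clamp may be needed for stability).
    return -torch.log1p(-log_probs[EMPTY].exp())
```

The mixed fine-tuning regime (50 random WSJTRAIN sentences per minibatch plus all in-domain partial annotations) is then just a loop; the model, optimizer, and step count below are placeholders:

```python
# Sketch of the fine-tuning loop described above. `model.loss` is assumed to
# sum span losses for one (fully or partially) annotated sentence.
import random

def finetune(model, optimizer, wsj_train, domain_train, steps=1000):
    for _ in range(steps):
        batch = random.sample(wsj_train, 50) + list(domain_train)
        random.shuffle(batch)
        loss = sum(model.loss(sentence) for sentence in batch) / len(batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Mixing newswire sentences into every minibatch is presumably what keeps WSJTEST performance from degrading while the parser adapts to the new domain.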
"Biomedicine and Chemistry We ran a similar experiment using biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007).", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences).", "In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no non-constituent declarations.", "Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJTRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important high-level strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which \"completes\" every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-31
Biochemistry Annotations
610 partial annotations (Avg. 4.6 per sentence) In situ hybridization has revealed a striking subnuclear distribution of c-myc RNA transcripts Cell growth of neuroblastoma cells in serum containing medium was clearly diminished by inhibition of FPTase
610 partial annotations (Avg. 4.6 per sentence) In situ hybridization has revealed a striking subnuclear distribution of c-myc RNA transcripts Cell growth of neuroblastoma cells in serum containing medium was clearly diminished by inhibition of FPTase
[]
GEM-SciDuet-train-116#paper-1313#slide-32
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
GEM-SciDuet-train-116#paper-1313#slide-32
What do partial annotations buy us
Correct Constituent % Error-Free Sentences % = PTB + Geo
Correct Constituent % Error-Free Sentences % = PTB + Geo
[]
GEM-SciDuet-train-116#paper-1313#slide-33
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F 1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e., any label is ok).", "Experiments: Geometry Questions. We took the publicly available training data from Seo et al. (2015), split the data into sentences, and then annotated each sentence as in Figure 1.", "Next, we randomly split these sentences into GEOTRAIN and GEODEV.", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) non-constituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJTRAIN sentences, plus all of GEOTRAIN.", "The results are in Table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning.", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3.", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof of concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of error-free sentences to 75.8%.", "Biomedicine and Chemistry. We ran a similar experiment using
biomedical and chemistry text, taken from the unannotated data provided by Nivre et al. (2007).", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences).", "In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no non-constituent declarations.", "Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJTRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work. The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation. Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to Garrette and Baldridge (2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation. Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "Li et al. (2016) provide a good overview.", "Here we highlight three important high-level strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which \"completes\" every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion. Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple of hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-33
Geometry Annotations
379 partial annotations (Avg. 3 per sentence) What is the value of y + z Diameter AC is perpendicular to chord BD at E Find the measure of the angle designated by x
379 partial annotations (Avg. 3 per sentence) What is the value of y + z Diameter AC is perpendicular to chord BD at E Find the measure of the angle designated by x
[]
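The paper_content above specifies the RSP span scorer: a two-layer BiLSTM over 1124-dimensional embeddings (a 100-d learned embedding concatenated with a 1024-d ELMo representation), 250-d hidden states, 0.4 dropout, span (i, j) represented as the concatenation of f_j − f_(i−1) and b_i − b_(j+1), and a one-layer feedforward network over labels. Below is a minimal PyTorch sketch of that scorer; the class name, label count, and the padded-boundary convention are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SpanClassifier(nn.Module):
    # Sketch of the span scorer described above. Assumes the embedding
    # sequence is padded with a start state at position 0 and an end state
    # at position n + 1, so the boundary differences f_j - f_(i-1) and
    # b_i - b_(j+1) are always defined.
    def __init__(self, emb_dim=1124, hidden=250, num_labels=100):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2, dropout=0.4,
                            bidirectional=True, batch_first=True)
        # "one-layer feedforward network": one hidden layer, then label scores
        self.ff = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, num_labels))

    def forward(self, embeddings, i, j):
        # embeddings: (1, n + 2, emb_dim), including the two padding positions
        out, _ = self.lstm(embeddings)            # (1, n + 2, 2 * hidden)
        h = out.size(-1) // 2
        fwd, bwd = out[0, :, :h], out[0, :, h:]   # forward / backward states
        span = torch.cat([fwd[j] - fwd[i - 1], bwd[i] - bwd[j + 1]])
        return self.ff(span).log_softmax(-1)      # log sigma(l | x, (i, j))
```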
GEM-SciDuet-train-116#paper-1313#slide-35
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-35
Error Analysis on Geometry Training Set
19% right-attaching participial adjectives Eg: segment labeled x, the center indicated
19% right-attaching participial adjectives Eg: segment labeled x, the center indicated
[]
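The paper text retained in the first record above describes how training handles partial annotations: a span declared a constituent without a label is only penalized for probability mass placed on the empty (non-constituent) label, since "any label is ok". One possible rendering of that rule as a per-span loss follows; the EMPTY index and the function's shape are assumptions for illustration, not the paper's implementation.

```python
import torch

EMPTY = 0  # assumed index of the empty-sequence (non-constituent) label

def span_loss(log_probs, label=None, is_constituent=True):
    # log_probs: 1-D tensor of log sigma(l | x, s) over all labels.
    if label is not None:
        return -log_probs[label]        # fully labeled span: ordinary NLL
    if is_constituent:
        # unlabeled constituent declaration: only penalize the probability
        # assigned to the empty label, i.e. -log(1 - P(empty))
        p_empty = log_probs[EMPTY].exp()
        return -torch.log1p(-p_empty.clamp(max=1 - 1e-6))
    return -log_probs[EMPTY]            # explicitly declared non-constituent
```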
GEM-SciDuet-train-116#paper-1313#slide-36
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-36
Right Attaching Participial Adjective Error
Find the hypotenuse of the triangle labeled t.
Find the hypotenuse of the triangle labeled t.
[]
GEM-SciDuet-train-116#paper-1313#slide-37
1313
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
We revisit domain adaptation for parsers in the neural era. First, we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single-model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], "paper_content_text": [ "Introduction Statistical parsers are often criticized for their performance outside of the domain they were trained on.", "The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive.", "In this paper, we revisit this issue in light of recent developments in neural natural language processing.", "Our paper rests on two observations: 1.", "It is trivial to train on partial annotations using a span-focused model.", "Stern et al.", "(2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance.", "We modify their parser, hence- forth MSP, so that it trains directly on individual labeled spans instead of parse trees.", "This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.", "2.", "The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models.", "Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.", "Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains.", "Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1 .", "The resulting parser also does not suffer any degradation on the newswire domain.", "Along the way, we provide several other notable contributions: • We raise the state-of-the-art single-model F 1score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.", "A trained model is publicly available.", "1 • We show that, even without domain-specific training data, our parser has much less out-ofdomain degradation than previous parsers on \"newswire-adjacent\" domains like the Brown corpus.", "• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).", "The Reconciled Span Parser (RSP) When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015) .", "Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc).", "There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to 
bridge this gap by modifying the training data, training algorithm, or the training objective.", "Alternatively, we could just better align the model with the annotation task.", "Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g.", "whether a particular span is a constituent.", "This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case.", "Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.", "In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al.", "(2017a) .", "RSP differs from MSP in the following ways: • It is trained on a span classification task.", "MSP trains on a maximum margin objective; that is, the loss function penalizes the 1 http://allennlp.org/models violation of a margin between the scores of the gold parse and the next highest scoring parse decoded.", "This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser.", "To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.", "• It uses contextualized word representations instead of predicted part-of-speech tags.", "Our model uses contextualized word representations as described in Peters et al.", "(2018) .", "It does not take part-of-speech-tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.", "Overview We will view a parse tree as a labeling of all the spans of a sentence such that: • Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree.", "For instance, span (2, 4) in Figure 2b is labeled with the sequence S, VP , as shown in Figure 2a .", "• Every non-constituent is labeled with the empty sequence.", "Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}.", "Define a parse for sentence x as a function π : spans(x) → L where L is the set of all sequences of non-terminal tags, including the empty sequence.", "We model the probability of a parse as the independent product of its span labels: P r(π|x) = s∈spans(x) P r(π(s) | x, s) ⇒ log P r(π|x) = s∈spans(x) log P r(π(s) | x, s) Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with: (Stern et al., 2017a) .", "Note that this probability model accords mass to mis-structured trees (e.g.", "overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree).", "We solve the following Integer Linear Program (ILP) 2 to find the highest scoring parse that admits a well-formed tree: max δ (i,j)∈spans(x) v + (i,j) δ (i,j) + v − (i,j) (1 − δ (i,j) ) subject to: i < k < j < m =⇒ δ (i,j) + δ (k,m) ≤ 1 (i, j) ∈ spans(x) =⇒ δ (i,j) ∈ {0, 1} where: v + (i,j) = max l s.t.", "l =∅ σ(l | x, (i, j)) v − (i,j) = σ(∅ | x, (i, j)) 2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree.", "However it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently 
well.", "Classification Model For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a) , which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016) .", "First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f i and b i respectively.", "Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f j − f i−1 and b i − b j+1 .", "A one-layer feedforward network maps each span representation to a distribution over labels.", "Classification Model Parameters and Initializations We preserve the settings used in Stern et al.", "(2017a) where possible.", "As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250.", "The dropout ratio for the LSTM is set to 0.4 .", "Unlike the model it is based on, our model uses word embeddings of length 1124.", "These result from concatenating a 100 dimension learned word embedding, with a 1024 di- Parser Rec Prec F 1 RNNG (Dyer et al., 2016) --91.7 MSP (Stern et al., 2017a) 4 The split we used is not standard for part-of-speech tagging.", "As a result, we do not compare to part-of-speech taggers.", "Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain.", "When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) , it achieved F 1 scores between 83% and 86%, even though its F 1 score on WSJTEST was 92.1%.", "In Table 3 , we discover that RSP does not suffer nearly as much degradation, with an average F 1 -score of 90.3%.", "To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals.", "We used the Stanford tagger 5 (Toutanova et al., 2003) to tag WSJ-TRAIN and the Brown verticals so that MSP could be given these at train and test time.", "We learned that most of the improvement can be attributed to the ELMo word representations.", "In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP.", "Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text.", "It is primarily composed of well-formed sentences with similar syntactic phenomena.", "Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus.", "If we try to run RSP on a more syntactically divergent corpus like QuestionBank 6 (Judge et al., 2006) , we find much more performance degradation.", "This is unsurprising, since WSJTRAIN does not contain many examples of question syntax.", "But how many examples do we need, to get good performance?", "(Stern et al., 2017a) .", "Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a) .", "MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003) .", 
"Surprisingly, with only 50 annotated questions (see Table 4 ), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.", "This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.", "The resulting system improves slightly on WSJTEST getting 94.38%.", "On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005) , we see a similar, if somewhat less dramatic, trend.", "See Table 5 .", "With 50 annotated sentences, performance on GE-NIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky's thesis (McClosky, 2010) -the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed.", "That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN.", "These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort.", "We explore this possibility in the next section.", "Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions.", "• Replaced the identified expressions with dummy words.", "• Parsed the resulting sentences.", "Figure 3 : The top-level split for the development sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "before and after retraining RSP on 63 partially annotated geometry statements.", "• Substituted the regex-analyzed expressions for the dummy words in the parses.", "It is clear why this was necessary.", "Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence \"In the rhombus PQRS, PR = 24 and QS = 10.\"", "The result is completely wrong, and useless to a downstream application.", "Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the \"regex-and-replace\" strategy: 1.", "It assumes that each expression always maps to the same constituent label.", "Consider \"2x = 3y\".", "This is a verb phrase in the sentence \"In the above figure, x is prime and 2x = 3y.\"", "However, it is a noun phrase in the sentence \"The equation 2x = 3y has 2 solutions.\"", "If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances.", "2.", "It assumes that each expression is always a constituent.", "Suppose that we replace the expression \"AB < 30\" with a dummy word.", "This means we cannot properly parse a sentence like \"When angle AB < 30, the lines are parallel,\" because the constituent \"angle AB\" no longer exists in the resulting sentence.", "3.", "It does not handle other syntactic variation.", "As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like \"labeled x\" in the phrase \"the segment labeled x.\"", "Encouraging a parser to recognize this syntactic construct is out-of-scope for the \"regex-and-replace\" strategy.", "Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1 .", "Because RSP's model directly predicts span constituency, we can simply mark up a sentence with the \"tricky\" domain-specific constituents that the model will not already have learned from WSJTRAIN.", "For instance, we mark up NOUN-LABEL constructs like \"chord BD\", and equations like \"AD = 4\".", "From these marked-up sentences, we can extract training 
instances declaring the constituency of certain spans (like \"to chord BD\" in the third example) and the implied non-constituency of certain spans (like \"perpendicular to chord\" in the third example).", "We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown).", "We do not require annotators to provide span labels (although they can if desired).", "If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e.", "any label is ok).", "Experiments Geometry Questions We took the publicly available training data from (Seo et al., 2015) , split the data into sentences, and then annotated each sentence as in Figure 1 .", "Next, we randomly split these sentences into GEO-TRAIN and GEODEV 7 .", "After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV.", "In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence.", "After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJ-TRAIN sentences, plus all of GEOTRAIN.", "The results are in table 6.", "After fine-tuning, the model gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning 8 .", "Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%.", "With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser's performance on newswire.", "Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3 .", "For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories.", "First, approximately 44% have some mishandled math syntax, like failing to recognize \"dimensions 16 by 8\" as a constituent, or providing a flat structuring of the equation \"BAC = 1/4 * ACB\" (instead of recognizing \"1/4 * ACB\" as a subconstituent).", "Second, approximately 19% have PP-attachment errors.", "Third, another 19% fail to correctly analyze right-attaching participial adjectives like \"labeled x\" in the noun phrase \"the segment labeled x\" or \"indicated\" in the noun phrase \"the center indicated.\"", "This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples.", "For instance, while we have a training instance \"Find [ the measure of [ the angle designated by x ] ],\" it does not explicitly highlight the constituency of \"designated by x\".", "This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser's errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied.", "As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4 ), added them to GEOTRAIN, and then retrained.", "Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%.", "Biomedicine and Chemistry We ran a similar experiment using 
biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007) .", "We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences) 9 .", "In BIOCHEM-TRAIN, we made an average of 4.2 constituent declarations per sentence.", "We made no nonconstituent declarations.", "Again, we started with RSP trained on WSJ-TRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ-TRAIN sentences, plus all of BIOCHEMTRAIN.", "Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation.", "As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences.", "Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing.", "Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Daumé, 2007; Finkel and Manning, 2009 ).", "In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention.", "Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain.", "In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations.", "Co-training, self-training, and model combination are orthogonal to our approach.", "Our work is a spiritual successor to (Garrette and Baldridge, 2013) , which shows how to train a part-of-speech tagger with a minimal amount of annotation effort.", "Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing.", "(Li et al., 2016) provides a good overview.", "Here we highlight three important highlevel strategies.", "The first is \"complete-then-train\" (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013) , which \"completes\" every partially annotated de-pendency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations.", "These \"completed\" parses are then used to train a new parser.", "The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to \"complete-then-train,\" but integrates parse completion into the training process.", "At each iteration, new \"complete\" parses are created using the parser model from the most recent training iteration.", "The third strategy (Li et al., 2014 (Li et al., , 2016 transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation.", "Then, the training objective is modified to support optimization over these forests.", "Our work differs from these in two respects.", "First, since we are training a constituency parser, our partial annotations are constituent 
bracketings rather than dependency arcs.", "Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data.", "While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing.", "These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG).", "Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations.", "Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers.", "Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent).", "This allows you to train with either full or partial annotations without any change to the training process.", "This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that \"parsers don't work outside of newswire.\"", "With a couple hours of effort (and a layman's understanding of syntactic building blocks), they can get significant performance improvements.", "We envision an iterative use case in which a user assesses a parser's errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.2", "4", "5.1", "5.2", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "The Reconciled Span Parser (RSP)", "Overview", "Classification Model", "Beyond Newswire", "Rapid Parser Extension", "Geometry Questions", "Biomedicine and Chemistry", "Related Work", "Domain Adaptation", "Learning from Partial Annotation", "Conclusion" ] }
GEM-SciDuet-train-116#paper-1313#slide-37
Iterative Annotation Proof of Concept
Invent 3 sentences similar to the incorrect one: Find the hypotenuse of the triangle labeled t; Given a circle with the tangent shown; Examine the following diagram with the square.
Invent 3 sentences similar to the incorrect one: Find the hypotenuse of the triangle labeled t; Given a circle with the tangent shown; Examine the following diagram with the square.
[]
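The proof of concept in this slide (invent a few sentences exhibiting the failing construction, partially annotate them, add them to the training set, retrain) is one round of the iterative workflow the paper envisions. Below is a hypothetical outline; `parser.agrees_with_annotations`, `annotate_spans`, and `fine_tune` stand in for manual or previously defined steps and are not an actual API.

```python
# Hypothetical outline of the iterative annotation loop: assess errors on the
# target domain, partially annotate the failing constructions, retrain, repeat.
def adapt_parser(parser, dev_sentences, fine_tune, annotate_spans, max_rounds=5):
    annotations = []
    for _ in range(max_rounds):
        errors = [s for s in dev_sentences
                  if not parser.agrees_with_annotations(s)]
        if not errors:
            break
        # The user marks up a handful of the failing constructions by hand
        # (e.g., right-attaching participial adjectives like "labeled t").
        annotations += annotate_spans(errors[:10])
        parser = fine_tune(parser, annotations)
    return parser
```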
GEM-SciDuet-train-116#paper-1313#slide-39
1313
GEM-SciDuet-train-116#paper-1313#slide-39
Conclusion
Recent developments make it much easier to train on partial annotations and build custom parsers. Making a few partial annotations can lead to significant performance improvements.
Recent developments make it much easier to train on partial annotations and build custom parsers. Making a few partial annotations can lead to significant performance improvements.
[]
GEM-SciDuet-train-117#paper-1314#slide-1
1314
Cooperative Learning of Disjoint Syntax and Semantics
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) has recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near-perfect accuracy on this task. Our model is composed of two separate modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250 ], "paper_content_text": [ "Introduction Standard linguistic theories propose that natural language is structured as nested constituents organised in the form of a tree (Partee et al., 1990) .", "However, most popular models, such as the Long Sort-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) , process text without imposing a grammatical structure.", "To bridge this gap between theory and practice models that process linguistic expressions in a tree-structured manner have been considered in recent work (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015; Bowman et al., 2016) .", "These tree-based models explicitly require access to the syntactic structure for the text, which is not entirely satisfactory.", "Indeed, parse tree level supervision requires a significant amount of annotations from expert lin-guists.", "These trees have been annotated with different goals in mind than the tasks we are using them for.", "Such discrepancy may result in a deterioration of the performance of models relying on them.", "Recently, several attempts were made to learn these models without explicit supervision for the parser (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "However, Williams et al.", "(2018a) has recently shown that the structures learned by these models cannot be ascribed to discovering meaningful syntactic structure.", "These models even fail to learn the simple context-free grammar of nested mathematical operations (Nangia and Bowman, 2018) .", "In this work, we present an extension of Choi et al.", "(2018) , that successfully learns these simple grammars while preserving competitive performance on several standard linguistic tasks.", "Contrary to previous work, our model makes a clear distinction between the parser and the compositional function.", "These two modules are trained with different algorithms, cooperating to build a semantic representation that optimises the objective function.", "The parser's goal is to generate a tree structure for the sentence.", "The compositional function follows this structure to produce the sentence representation.", "Our model contains a continuous component, the compositional function, and a discrete one, the parser.", "The whole system is trained end-to-end with a mix of reinforcement learning and gradient descent.", "Drozdov and Bowman (2017) has noticed the 
difficulty of mixing these two optimisation schemes without one dominating the other.", "This typically leads to the \"coadaptation problem\" where the parser simply follows the compositional function and fails to produce meaningful syntactic structures.", "In this work, we show that this pitfall can be avoided by synchronising the learning paces of the two optimisation schemes.", "This is achieved by combining several recent advances in reinforcement learning.", "First, we use input-dependent control variates to reduce the variance of our gradient estimates (Ross, 1997).", "Then, we apply multiple gradient steps to the parser's policy while controlling for its learning pace using the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017).", "The code for our model is publicly available.", "Preliminaries In this section, we present existing work on Recursive Neural Networks and their training in the absence of supervision on the syntactic structures.", "Recursive Neural Networks A Recursive Neural Network (RvNN) has its architecture defined by a directed acyclic graph (DAG) given alongside an input sequence (Goller and Kuchler, 1996).", "RvNNs are commonly used in NLP to generate sentence representations that leverage available syntactic information, such as constituency or dependency parse trees (Socher et al., 2011).", "Given an input sequence and its associated DAG, an RvNN processes the sequence by applying a transformation to the representations of the tokens lying on the lowest levels of the DAG.", "This transformation, or compositional function, merges these representations into representations for the nodes on the next level of the DAG.", "This process is repeated recursively along the graph structure until the top-level nodes are reached.", "In this work, we assume that the compositional function is the same for every node in the graph.", "Tree-LSTM.", "We focus on a specific type of RvNNs, the tree-based long short-term memory network (Tree-LSTM) of Tai et al.", "(2015) and Zhu et al.", "(2015).", "Its compositional function generalizes the LSTM cell of Hochreiter and Schmidhuber (1997) to tree-structured topologies, i.e., $\begin{pmatrix} z \\ i \\ f_l \\ f_r \\ o \end{pmatrix} = \begin{pmatrix} \tanh \\ \sigma \\ \sigma \\ \sigma \\ \sigma \end{pmatrix} \left( R \begin{pmatrix} h_l \\ h_r \end{pmatrix} + b \right)$, $c_p = z \odot i + c_l \odot f_l + c_r \odot f_r$, $h_p = \tanh(c_p) \odot o$, where $\sigma$ and $\tanh$ are the sigmoid and hyperbolic tangent functions applied elementwise, and $\odot$ denotes the elementwise product.", "The Tree-LSTM cell is differentiable with respect to its recursion matrix $R$, bias $b$ and its input.", "The gradients of a Tree-LSTM can thus be computed with backpropagation through structure (BPTS) (Goller and Kuchler, 1996).", "Learning with RvNNs A tree-based RvNN is a function $f_\theta$ parameterized by a $d$-dimensional vector $\theta$ that predicts an output $y$ given an input $x$ and a tree $t$.
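The composition just defined is straightforward to implement. Below is a minimal sketch of the standard binary Tree-LSTM cell, with the five gates produced by a single affine map over the concatenated children states; it is an illustration of the formulation above, not the authors' released code.

```python
import torch
import torch.nn as nn

class BinaryTreeLSTMCell(nn.Module):
    """Compose two children (h_l, c_l), (h_r, c_r) into a parent (h_p, c_p)."""
    def __init__(self, dim):
        super().__init__()
        # R and b from the equation: one affine map from [h_l; h_r] to 5 gates.
        self.proj = nn.Linear(2 * dim, 5 * dim)

    def forward(self, hl, cl, hr, cr):
        z, i, fl, fr, o = self.proj(torch.cat([hl, hr], dim=-1)).chunk(5, dim=-1)
        z = torch.tanh(z)
        i, fl, fr, o = (torch.sigmoid(g) for g in (i, fl, fr, o))
        c_p = z * i + cl * fl + cr * fr   # elementwise products
        h_p = torch.tanh(c_p) * o
        return h_p, c_p
```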
"Learning with RvNNs A tree-based RvNN is a function f_θ parameterized by a d-dimensional vector θ that predicts an output y given an input x and a tree t. Given a dataset D of N triplets (x, t, y), the parameters of the RvNN are learned with the following minimisation problem: min_{θ ∈ R^d} (1/N) Σ_{(x,t,y) ∈ D} ℓ(f_θ(x, t), y), (1) where ℓ is a logistic regression loss.", "These models need an externally provided parsing tree for each input sentence during both training and evaluation.", "Alternatives, such as the shift-reduce-based SPINN model of Bowman et al.", "(2016), learn an internal parser from the given trees.", "While these solutions do not need external trees during evaluation, they still require tree level annotations for training.", "More recent work has focused on learning a latent parser with no direct supervision.", "Latent tree models Latent tree models aim at jointly learning the compositional function f_θ and a parser without supervision on the syntactic structures (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018).", "The latent parser is defined as a parametric probability distribution over trees conditioned on the input sequence.", "The parameters of this tree distribution p_φ(.|x) are represented by a vector φ.", "Given a dataset D of pairs of input sequences x and outputs y, the parameters θ and φ are jointly learned by minimising the following objective function: min_{θ,φ} L(θ, φ) = (1/N) Σ_{(x,y)} ℓ(E_φ[f_θ(x, t)], y), (2) where E_φ is the expectation with respect to the p_φ(.|x) distribution.", "Directly minimising this objective function is often difficult due to the expensive marginalisation of the unobserved trees.", "Hence, when ℓ is a convex function (e.g.", "cross entropy of an exponential family), an upper bound of Eq.", "(2) can usually be derived by applying Jensen's inequality: L̄(θ, φ) = (1/N) Σ_{(x,y)} E_φ[ℓ(f_θ(x, t), y)]. (3)", "Learning a distribution over a set of discrete items involves a discrete optimisation scheme.", "For example, the RL-SPINN model of Yogatama et al.", "(2016) uses a mix of gradient descent for θ and REINFORCE for φ (Williams et al., 2018a).", "Drozdov and Bowman (2017) have recently observed that this optimisation strategy tends to produce poor parsers, e.g., parsers that only generate left-branching trees.", "The effect, called the coadaptation issue, is caused by both a bias in the parsing strategy and a difference in the convergence paces of the continuous and discrete optimisers.", "Typically, the parameters θ are learned more rapidly than φ.", "This limits the exploration of the search space to parsing strategies similar to those found at the beginning of the training.", "Gumbel Tree-LSTM In their Gumbel Tree-LSTM model, Choi et al.", "(2018) propose an alternative parsing strategy to avoid the coadaptation issue.", "Their parser incrementally merges a pair of consecutive constituents until a single one remains.", "This strategy reduces the bias towards certain tree configurations observed with RL-SPINN.", "Each word i of the input sequence is represented by an embedding vector.", "A leaf transformation maps this vector to a pair of vectors r_i^0 = (h_i^0, c_i^0).", "We considered three types of leaf transformations: affine transformation, LSTM and bidirectional LSTM.", "The resulting representations form the initial states of the Tree-LSTM.", "In the absence of supervision, the tree is built in a bottom-up fashion by recursively merging consecutive constituents (i, i + 1) based on merge-candidate scores.", "On each level k of the bottom-up derivation, the merge-candidate score of the pair (i, i+1) is computed as follows: s_k(i) = ⟨q, Tree-LSTM(r_i^k, r_{i+1}^k)⟩, where q is a trainable query vector and r_i^k is the constituent representation at position i after k mergings.", "We merge a pair (i*, i* + 1) sampled from the Categorical distribution built on the merge-candidate scores.", "The representations of the constituents are then updated as follows: r_i^{k+1} = r_i^k if i < i*; Tree-LSTM(r_i^k, r_{i+1}^k) if i = i*; r_{i+1}^k if i > i*.", "This procedure is repeated until one constituent remains.", "Its hidden state is the input sentence representation."
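The bottom-up merging procedure just described can be sketched as follows, reusing the tree_lstm_cell above. This is an illustrative, unbatched version under our own naming assumptions (sample_tree, the rng argument); a real implementation would batch the scoring and keep the computation graph for learning.

def sample_tree(leaves, q, R, b, rng):
    """Sample a binary tree bottom-up; leaves is a list of (h, c) pairs
    produced by the leaf transformation, q is the trainable query vector."""
    reps, log_probs = list(leaves), []
    while len(reps) > 1:
        # Merge candidates and their scores s_k(i) = <q, Tree-LSTM(r_i, r_{i+1})>
        cands = [tree_lstm_cell(*reps[i], *reps[i + 1], R, b)
                 for i in range(len(reps) - 1)]
        scores = np.array([q @ h for h, _ in cands])
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                       # categorical over merge positions
        i_star = rng.choice(len(probs), p=probs)   # sample which pair to merge
        log_probs.append(np.log(probs[i_star]))
        reps[i_star:i_star + 2] = [cands[i_star]]  # replace the pair by its parent
    return reps[0][0], log_probs                   # root hidden state, action log-probs

Summing log_probs gives log p_φ(t|x) in the sense of Eq. (5) below; replacing the sampling line with i_star = int(np.argmax(probs)) yields the greedy decoding used later for the self-critical baseline.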
"This merging procedure is non-differentiable.", "Choi et al.", "(2018) use an approximation based on the Gumbel-Softmax distribution (Maddison et al., 2016; Jang et al., 2016) and the reparametrization trick (Kingma and Welling, 2013).", "This relaxation makes the problem differentiable at the cost of a bias in the gradient estimates (Jang et al., 2016).", "This difference between the real objective function and their approximation could explain why their method cannot recover simple context-free grammars (Nangia and Bowman, 2018).", "We investigate this question by proposing an alternative optimisation scheme that directly aims for the correct objective function.", "Our model We consider the problem defined in Eq.", "(3) to jointly learn a composition function and an internal parser.", "Our model is composed of the parser of Choi et al.", "(2018) and the Tree-LSTM for the composition function.", "As suggested in past work (Schulman et al., 2017), we added an entropy term H over the tree distribution to the objective function: min_{θ,φ} L̄(θ, φ) − λ Σ_x H(t | x), (4) where λ > 0.", "This regulariser improves exploration by preventing early convergence to a suboptimal deterministic parsing strategy.", "The new objective function is differentiable with respect to θ, but not φ, the parameters of the parser.", "Learning θ follows the same procedure with BPTS as if the tree were externally given.", "In the rest of this section, we discuss the optimization of the parser and a cooperative training strategy to reduce the coadaptation issue.", "Unbiased gradient estimation We cast the training of the parser as a reinforcement learning problem.", "The parser is an agent whose reward function is the negative of the loss function defined in Eq.", "(3).", "Its action space is the space of binary trees.", "The agent's policy is a probability distribution over binary trees that decomposes as a sequence of K merging actions: p_φ(t|x) = ∏_{k=0}^{K} π_φ(a_{i_k} | r^k), (5) where r^k = (r_0^k, ..., r_{K−k}^k).", "The loss function is optimised with respect to φ with REINFORCE (Williams, 1992).", "REINFORCE requires a considerable number of random samples to obtain a gradient estimate with a reasonable level of variance.", "This number is positively correlated with the size of the search space, which is exponentially large in the case of binary trees.", "We consider several extensions of REINFORCE to circumvent this problem.", "Variance reduction.", "An alternative solution to increasing the number of samples is the control variates method (Ross, 1997).", "It takes advantage of random variables with known expected values that positively correlate with the quantity whose expectation we are trying to estimate.", "Given an input-output pair (x, y) and a tree t sampled from p_φ(t|x), let's define the random variable G as: G(t) = ℓ(f_θ(x, t), y) ∂ log p_φ(t|x) / ∂φ. (6)", "According to REINFORCE, calculating the gradient with respect to φ for the pair (x, y) is then equivalent to determining the unknown mean of the random variable G(t) 2 .",
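As a sketch of the resulting estimator, the score-function (REINFORCE) gradient of Eq. (3) with respect to φ can be written as below. Both loss_fn and grad_log_prob are callbacks whose names we invent for illustration; they are not part of the paper's code.

def reinforce_grad(trees, loss_fn, grad_log_prob):
    """Monte Carlo estimate of the gradient of E_φ[ℓ] over sampled trees.
    loss_fn(t) returns ℓ(f_θ(x, t), y); grad_log_prob(t) returns
    ∇_φ log p_φ(t|x) as a flat vector (the score function)."""
    grads = [loss_fn(t) * grad_log_prob(t) for t in trees]
    return sum(grads) / len(trees)  # empirical mean of G(t) from Eq. (6)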
"Let's assume there is a control variate, i.e., a random variable b(t) that positively correlates with G and has known expected value with respect to p φ (.|x).", "Given N samples of the G(t) and the control variate b(t), the new gradient estimator is: G CV = E p φ (t|x) [b(t)] + 1 N N i=1 (G(t i ) − b(t i )) .", "A popular control variate, or baseline, used in REINFORCE is the moving average of recent rewards multiplied by the score function (Ross, 1997) : b(t) = c∇ φ log p φ (t|x).", "It has a zero mean under the p φ (.|x) distribution and it positively correlates with G(t).", "2 Note that while we are computing the gradients using , we could also directly optimise the parser with respect to downstream accuracy.", "Surrogate loss.", "REINFORCE often is implemented via a surrogate loss defined as follow: E t [r φ (t) (f θ (x, t), y)] , (7) whereÊ t is the empirical average over a finite batch of samples and r φ (t) = p φ (t|x) p φ old (t|x) is the probability ratio with φ old standing for the parameters before the update.", "Input-dependent baseline.", "The moving average baseline cannot detect changes in rewards caused by structural differences in the inputs.", "In our case, a long arithmetic expression is much harder to parse than a short one, systematically leading to their lower rewards.", "This structural differences in the rewards aggravate the credit assignment problem by encouraging REINFORCE to discard actions sampled for longer sequences even though there might be some subsequences of actions that produce correct parsing subtrees.", "A solution is to make the baseline inputdependent.", "In particular, we use the self-critical training (SCT) baseline of Rennie et al.", "(2017) , defined as: b(t, x) = c θ,φ (x)∇ φ log p φ (t | x), where c θ,φ is the reward obtained with the policy used at test time, i.e.,t = arg max p φ (t|x).", "This control variate has a zero mean under the p φ (t|x) distribution and correlates positively with the gradients.", "Computing the arg max of a policy among all possible binary trees has exponential complexity.", "We replace it with a simpler greedy decoding, i.e, a tree t is selected by following a sequence of greedy actionsâ k : a k = arg max π φ (a k |r k ).", "This approximation is very efficient and computing the baseline requires only one additional forward pass.", "Gradient normalization.", "We empirically observe significant fluctuations in the gradient norms.", "This creates instability that can not be reduced by additive terms, such as the inputdependent baselines.", "A solution is to divide the gradients by a coarse approximation of their norm, e.g., a running estimate of the reward standard deviation (Mnih and Gregor, 2014) .", "This trick ensures that the rewards remain approximately in the unit ball, making the learning process less sensitive to steep changes in the loss.", "Synchronizing syntax and semantics learning with PPO The gradients of the loss function from the Eq.", "(4) are calculated using two different schemes, BPST for the composition function parameters θ and RE-INFORCE for the parser parameters φ.", "Then, both are updated with SGD.", "The estimate of the gradient with respect to φ has higher variance compared to the estimate with respect to θ.", "Hence, using the same learning rate schedule does not necessarily correspond to the same real pace of learning.", "It is φ parameters that are harder to optimise, so to improve training stability and convergence it is reasonable to aim for such updates that does not change the 
policy too much or too little.", "A simple yet effective solution is the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017) .", "It considers the next surrogate loss: E t max r φ (t) (f θ (x, t), y) , r c φ (t) (f θ (x, t), y) , Where r c φ (t) = clip (r φ (t), 1 − , 1 + ) and is a real number in (0; 0.5].", "The first argument of the max is the surrogate loss for REINFORCE.", "The clipped ratio in the second argument disincentivises the optimiser from performing updates resulting in large tree probability changes.", "With this, the policy parameters can be optimised with repeated K steps of SGD to ensure a similar \"pace\" of learning between the parser and the compositional function.", "Related work Besides the works mentioned in Sec.", "2 and Sec.", "3, there is a vast literature on learning latent parsers.", "Early connectionist work in inferring context-free grammars proposed stack-augmented models and relied on explicit supervision on the strings that belonged to the target language and those that did not (Giles et al., 1989; Sun, 1990; Mozer and Das, 1992) .", "More recently, new stackaugmented models were shown to learn latent grammars from positive evidence alone (Joulin and Mikolov, 2015) .", "In parallel to these, other statistical approaches were proposed to automatically induce grammars from unparsed text (Sampson, 1986; Magerman and Marcus, 1990; Carroll and Charniak, 1992; Brill, 1993; Klein and Manning, 2002) .", "Our work departs from these approaches in that we aim at learning a latent grammar in the context of performing some given task.", "Socher et al.", "(2011) uses a surrogate autoencoder objective to search for a constituency structure, merging nodes greedily based on the reconstruction loss.", "Maillard et al.", "(2017) defines a relaxation of a CYK-like chart parser that is trained for a particular task.", "A similar idea is introduced in Le and Zuidema (2015) where an automatic parser prunes the chart to reduce the overall complexity of the algorithm.", "Another strategy, similar in nature, has been recently proposed by Corro and Titov (2018) , where Gumbel noise is used with differentiable dynamic programming to generate dependency trees.", "In contrast, Yogatama et al.", "(2016) learns a Shift-Reduce parser using reinforcement learning.", "Maillard and Clark (2018) further proposes a beam search strategy to overcome learning trivial trees.", "On a different vein, Vlad Niculae (2018) proposes a quadratic penalty term over the posterior distribution of nonprojective dependency trees to enforce sparsity of the relaxation.", "Finally, there is a large body of work in Reinforcement Learning that aims at discovering how to combine elementary modules to solve complex tasks (Singh, 1992; Chang et al., 2018; Sahni et al., 2017) .", "Due to the limited space, we will not discuss them in further details.", "Experiments We conducted experiments on three different tasks: evaluating mathematical expressions on the ListOps dataset (Nangia and Bowman, 2018) , sentiment analysis on the SST dataset (Socher et al., 2013) and natural language inference task on the SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Technical details.", "For ListOps, we follow the experimental protocol of Nangia and Bowman (2018) , i.e., a 128 dimensional model and a tenway softmax classifier.", "However, we replace their multi-layer perceptron (MLP) by a linear classifier.", "The validation set is composed of 1k examples randomly selected from the training 
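For completeness, here is a sketch of the clipped surrogate for a batch of sampled trees. The ratios are the r_φ(t) values recomputed after every SGD step, losses the corresponding ℓ values, and eps=0.2 is a common PPO default rather than a value reported in this section.

def ppo_surrogate_loss(ratios, losses, eps=0.2):
    """Clipped PPO surrogate, minimised with K repeated SGD steps on φ.
    Note the max (not min): we are minimising a loss, not maximising a reward."""
    ratios, losses = np.asarray(ratios), np.asarray(losses)
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps)
    return float(np.mean(np.maximum(ratios * losses, clipped * losses)))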
set.", "For SST and NLI, we follow the setup of Choi et al.", "(2018) : we initialise the word vectors with GloVe300D (Pennington et al., 2014) and train an MLP classifier on the sentence representations.", "The hyperparameters are selected on the validation set using 5 random seeds for each configuration.", "Our hyperparameters are the learning rate, weight decay, the regularisation parameter λ, the leaf transformations, variance reduction hyperpa- rameters and the number of updates K in PPO.", "We use an adadelta optimizer (Zeiler, 2012).", "ListOps The ListOps dataset probes the syntax learning ability of latent tree models (Nangia and Bowman, 2018) .", "It is designed to have a single correct parsing strategy that a model must learn in order to succeed.", "It is composed of prefix arithmetic expressions and the goal is to predict the numerical output associated with the evaluation of the expression.", "The sequences are made of integers in [0, 9] Table 2 , the current leading latent tree models are unable to learn the correct parsing strategy on ListOps (Nangia and Bowman, 2018) .", "They even achieve performance worse than purely sequential recurrent networks.", "On the other hand, our model achieves near perfect accuracy on this task, suggesting that our model is able to discover the correct parsing strategy.", "Our model differs in several ways from the Gumbel Tree-LSTM of Choi et al.", "(2018) that could explain this gap in performance.", "In the rest of this section, we perform an ablation study on our model to understand the importance of each of these differences.", "Impact of the baseline and PPO.", "We report the impact of our design choices on the performance in Table 1 .", "Our model without baseline nor PPO is vanilla REINFORCE.", "The baselines only improve performance when PPO is used.", "Furthermore, these ablated models without PPO perform on-par with the RL-SPINN model (see Table 2 ).", "This confirms our expectations for models that fail to synchronise syntax and semantics learning.", "Interestingly, using PPO has a positive impact on both baselines, but accuracy remains low with the moving average baseline.", "The reduction of variance induced by the SCT baseline leads to a near-perfect recovery of the good parsing strategy in all five experiments.", "This shows the importance of this baseline for the stability of our approach.", "Sensitivity to hyperparameters.", "Our model is relatively robust to hyperparameters changes when we use the SCT baseline and PPO.", "For example, changing the leaf transformation or dimensionality of the model has a minor impact on performance.", "However, we have observed that the choice of the optimiser has a significant impact.", "For example, the average performance drops to 73.0% if we replace Adadelta by Adam (Kingma and Ba, 2014 ).", "Yet, the maximum value out of 5 runs remains relatively high, 99.0%.", "Untied parameters.", "As opposed to previous work, the parameters of the parser and the composition function are not tied in our model.", "Without this separation between syntax and semantics, it would be impossible to update one module with- out changing the other.", "The gradient direction is then dominated by the low variance signal from the semantic component, making it hard to learn the parser.", "We confirmed experimentally that our model with tied parameters fails to find the correct parser and its accuracy drops to 64.7%.", "Extrapolation and Grammaticality.", "Recursive models have the potential to generalise to any 
sequence length.", "Our model was trained with sequences of length up to 130 tokens.", "We test the ability of the model to generalise to longer sequences by generating additional expressions of lengths 200 to 1000.", "As shown in Fig.1 , our model has a little loss in accuracy as the length increases to ten times the maximum length seen during training.", "On the other hand, we notice that final representations produced by the parser are very similar to each other.", "Indeed, the cosine similarity between these vectors for the test set has a mean value of 0.998 with a standard deviation of 0.002.", "There are two possible explanations for this observation: either our model assigns similar representations to valid expressions, or it produces a trivial uninformative representation regardless of the expression.", "To verify which explanation is correct, we generate ungrammatical expressions by removing either one operation token or one closing bracket symbol for each sequence in the test set.", "As shown in Figure 2 , in contrast to grammatical expressions, ungrammatical ones tend to be very different from each other: \"Happy families are all alike; every unhappy family is unhappy in its own way.\"", "The only exception, marked by a mode near 1, come from ungrammatical expressions that represent incomplete expressions because of missing a closing bracket at the end.", "This kind of sequences were seen by the parser during training and they indeed have to be represented by the same vector.", "These observations show that our model does not produce a trivial representation, but identifies the rules and constraints of the grammar.", "Moreover, vectors for grammatical sequences are so different from vectors for ungrammatical ones that you can tell them apart with 99.99% accuracy by simply measuring their cosine similarity to a randomly chosen grammatical vector from the training set.", "Interestingly, we have not observed a similar signal from the vectors generated by the composition function.", "Even learning a naive classifier between grammatical and ungrammatical expressions on top of these representations achieves an accuracy of only 75%.", "This suggests that most of the syntactic information is captured by the parser, not the composition function.", "Natural Language Inference We next evaluate our model on natural language inference using the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Natural language inference consists in predicting the relationship between two sentences which can be either entailment, contradiction, or neutral.", "The task can be formulated as a three-way classification problem.", "The results are shown in Tables 3 and 4 .", "When training the model on MultiNLI dataset we augment the training data with the SNLI data and use matched versions of the de- velopment and test sets.", "Surprisingly, two out of four models for MultiNLI task collapsed to leftbranching parsing strategies.", "This collapse can be explained by the absence of the entropy regularisation and the small number of PPO updates K = 1, which were determined to be optimal via hyperparameter optimisation.", "As with ListOps, using an Adadelta optimizer significantly improves the training of the model.", "Sentiment Analysis We evaluate our model on a sentiment classification task using the Stanford Sentiment Treebank (SST) of Socher et al.", "(2013) .", "All sentences in SST are represented as binary parse trees, and each subtree of a 
"Natural Language Inference We next evaluate our model on natural language inference using the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Natural language inference consists of predicting the relationship between two sentences, which can be either entailment, contradiction, or neutral.", "The task can be formulated as a three-way classification problem.", "The results are shown in Tables 3 and 4.", "When training the model on the MultiNLI dataset, we augment the training data with the SNLI data and use the matched versions of the development and test sets.", "Surprisingly, two out of four models for the MultiNLI task collapsed to left-branching parsing strategies.", "This collapse can be explained by the absence of the entropy regularisation and the small number of PPO updates K = 1, which were determined to be optimal via hyperparameter optimisation.", "As with ListOps, using an Adadelta optimizer significantly improves the training of the model.", "Sentiment Analysis We evaluate our model on a sentiment classification task using the Stanford Sentiment Treebank (SST) of Socher et al.", "(2013).", "All sentences in SST are represented as binary parse trees, and each subtree of a parse tree is annotated with the corresponding sentiment score.", "There are two versions of the dataset, with either binary labels, \"negative\" or \"positive\" (SST-2), or five labels representing fine-grained sentiments (SST-5).", "As shown in Table 5, our results are in line with previous work, confirming the benefits of using latent syntactic parse trees instead of the predefined syntax.", "We noticed that all models trained on the NLI or sentiment analysis tasks have parsing policies with relatively high entropy.", "This indicates that the algorithm does not prefer any specific grammar.", "Indeed, the generated trees are very similar to balanced ones.", "This result is in line with Shi et al.", "(2018), where they observe that a binary balanced tree encoder gets the best results on most classification tasks.", "We also compare with state-of-the-art sequence-based models.", "For the most part, these models are pre-trained on larger datasets and fine-tuned on these tasks.", "Nonetheless, they outperform recursive models by a significant margin.", "Performance on these datasets is more impacted by pre-training than by learning the syntax.", "It would be interesting to see if a similar pre-training would also improve the performance of recursive models with latent tree learning.", "Conclusion In this paper, we have introduced a novel model for learning latent tree parsers.", "Our approach relies on a separation between syntax and semantics.", "This allows dedicated optimisation schemes for each module.", "In particular, we found that it is important to have an unbiased estimator of the parser gradients and to allow multiple gradient steps with PPO.", "When tested on a CFG, our learned parser generalises to sequences of any length and distinguishes grammatical from ungrammatical expressions by forming meaningful representations for well-formed expressions.", "For natural language tasks, instead, the model prefers to fall back to trivial strategies, in line with what was previously observed by Shi et al.", "(2018).", "Additionally, our approach performs competitively on several real natural language tasks.", "In the future, we would like to explore further relaxation-based techniques for learning the parser, such as REBAR (Tucker et al., 2017) or ReLAX (Grathwohl et al., 2017).", "Finally, we plan to look into applying recursive approaches to language modelling as a pre-training step and measure if it has the same impact on downstream tasks as sequential models." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.3.1", "3", "3.1", "3.2", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "Recursive Neural Networks", "Learning with RvNNs", "Latent tree models", "Gumbel Tree-LSTM", "Our model", "Unbiased gradient estimation", "Synchronizing syntax and semantics learning with PPO", "Related work", "Experiments", "ListOps", "Natural Language Inference", "Sentiment Analysis", "Conclusion" ] }
GEM-SciDuet-train-117#paper-1314#slide-1
Latent tree learning
Recent work has shown that: trees do not resemble any semantic or syntactic formalisms; parsing strategies are not consistent across random restarts; these models fail to learn the simple context-free grammar.
[]
GEM-SciDuet-train-117#paper-1314#slide-6
1314
Cooperative Learning of Disjoint Syntax and Semantics
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) have recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near perfect accuracy on this task. Our model is composed of two separate modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis. * Work done while the author was an intern at Facebook AI Research.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250 ], "paper_content_text": [ "Introduction Standard linguistic theories propose that natural language is structured as nested constituents organised in the form of a tree (Partee et al., 1990) .", "However, most popular models, such as the Long Sort-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) , process text without imposing a grammatical structure.", "To bridge this gap between theory and practice models that process linguistic expressions in a tree-structured manner have been considered in recent work (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015; Bowman et al., 2016) .", "These tree-based models explicitly require access to the syntactic structure for the text, which is not entirely satisfactory.", "Indeed, parse tree level supervision requires a significant amount of annotations from expert lin-guists.", "These trees have been annotated with different goals in mind than the tasks we are using them for.", "Such discrepancy may result in a deterioration of the performance of models relying on them.", "Recently, several attempts were made to learn these models without explicit supervision for the parser (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "However, Williams et al.", "(2018a) has recently shown that the structures learned by these models cannot be ascribed to discovering meaningful syntactic structure.", "These models even fail to learn the simple context-free grammar of nested mathematical operations (Nangia and Bowman, 2018) .", "In this work, we present an extension of Choi et al.", "(2018) , that successfully learns these simple grammars while preserving competitive performance on several standard linguistic tasks.", "Contrary to previous work, our model makes a clear distinction between the parser and the compositional function.", "These two modules are trained with different algorithms, cooperating to build a semantic representation that optimises the objective function.", "The parser's goal is to generate a tree structure for the sentence.", "The compositional function follows this structure to produce the sentence representation.", "Our model contains a continuous component, the compositional function, and a discrete one, the parser.", "The whole system is trained end-to-end with a mix of reinforcement learning and gradient descent.", "Drozdov and Bowman (2017) has noticed the 
difficulty of mixing these two optimisation schemes without one dominating the other.", "This typically leads to the \"coadaptation problem\" where the parser simply follows the compositional function and fails to produce meaningful syntactic structures.", "In this work, we show that this pitfall can be avoided by synchronising the learning paces of the two optimisation schemes.", "This is achieved by com-bining several recent advances in reinforcement learning.", "First, we use input-dependent control variates to reduce the variance of our gradient estimates (Ross, 1997) .", "Then, we apply multiple gradient steps to the parser's policy while controlling for its learning pace using the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017) .", "The code for our model is publicly available 1 .", "Preliminaries In this section, we present existing works on Recursive Neural Networks and their training in the absence of supervision on the syntactic structures.", "Recursive Neural Networks A Recursive Neural Network (RvNN) has its architecture defined by a directed acyclic graph (DAG) given alongside with an input sequence (Goller and Kuchler, 1996) .", "RvNNs are commonly used in NLP to generate sentence representation that leverages available syntactic information, such as a constituency or a dependency parse trees (Socher et al., 2011) .", "Given an input sequence and its associated DAG, a RvNN processes the sequence by applying a transformation to the representations of the tokens lying on the lowest levels of the DAG.", "This transformation, or compositional function, merges these representations into representations for the nodes on the next level of the DAG.", "This process is repeated recursively along the graph structure until the top-level nodes are reached.", "In this work, we assume that the compositional function is the same for every node in the graph.", "Tree-LSTM.", "We focus on a specific type of RvNNs, the tree-based long short-term memory network (Tree-LSTM) of Tai et al.", "(2015) and Zhu et al.", "(2015) .", "Its compositional function generalizes the LSTM cell of Hochreiter and Schmidhuber (1997) to tree-structured topologies, i.e.,       z i f l f r o       =       tanh σ σ σ σ       R h l h r + b , c p = z i + c l f l + c r f r , h p = tanh(c p ) o, where σ and tanh are the sigmoid and hyperbolic tangent functions.", "Tree-LSTM cell is differentiable with respect to its recursion matrix R, bias b and its input.", "The gradients of a Tree-LSTM can thus be computed with backpropagation through structure (BPTS) (Goller and Kuchler, 1996) .", "Learning with RvNNs A tree-based RvNN is a function f θ parameterized by a d dimensional vector θ that predicts an output y given an input x and a tree t. 
Given a dataset D of N triplets (x, t, y), the parameters of the RvNN are learned with the following minimisation problem: min θ∈R d 1 N (x,t,y)∈D (f θ (x, t), y), (1) where is a logistic regression function.", "These models need an externally provided parsing tree for each input sentence during both training and evaluation.", "Alternatives, such as the shift-reducebased SPINN model of Bowman et al.", "(2016) , learn an internal parser from the given trees.", "While these solutions do not need external trees during evaluation, they still require tree level annotations for training.", "More recent work has focused on learning a latent parser with no direct supervision.", "Latent tree models Latent tree models aim at jointly learning the compositional function f θ and a parser without supervision on the syntactic structures (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "The latent parser is defined as a parametric probability distribution over trees conditioned on the input sequence.", "The parameters of this tree distribution p φ (.|x) are represented by a vector φ.", "Given a dataset D of pairs of input sequences x and outputs y, the parameters θ and φ are jointly learned by minimising the following objective function: min θ,φ L(θ, φ) = 1 N (x,y) (E φ [f θ (x, t)], y), (2) where E φ is the expectation with respect to the p φ (.|x) distribution.", "Directly minimising this objective function is often difficult due to expensive marginalisation of the unobserved trees.", "Hence, when is a convex function (e.g.", "cross entropy of an exponential family) usually an upper bound of Eq.", "(2) can be derived by applying Jensen's inequality: L(θ, φ) = 1 N (x,y) E φ [ (f θ (x, t), y)].", "(3) Learning a distribution over a set of discrete items involves a discrete optimisation scheme.", "For example, the RL-SPINN model of Yogatama et al.", "(2016) uses a mix of gradient descent for θ and REINFORCE for φ (Williams et al., 2018a) .", "Drozdov and Bowman (2017) has recently observed that this optimisation strategy tends to produce poor parsers, e.g., parsers that only generate left-branching trees.", "The effect, called the coadaptation issue, is caused by both bias in the parsing strategy and a difference in convergence paces of continuous and discrete optimisers.", "Typically, the parameters θ are learned more rapidly than φ.", "This limits the exploration of the search space to parsing strategies similar to those found at the beginning of the training.", "Gumbel Tree-LSTM In their Gumbel Tree-LSTM model, Choi et al.", "(2018) propose an alternative parsing strategy to avoid the coadaptation issue.", "Their parser incrementally merges a pair of consecutive constituents until a single one remains.", "This strategy reduces the bias towards certain tree configurations observed with RL-SPINN.", "Each word i of the input sequence is represented by an embedding vector.", "A leaf transformation maps this vector to pair of vectors r 0 i =(h 0 i , c 0 i ).", "We considered three types of leaf transformations: affine transformation, LSTM and bidirectional LSTM.", "The resulting representations form the initial states of the Tree-LSTM.", "In the absence of supervision, the tree is built in a bottomup fashion by recursively merging consecutive constituents (i, i + 1) based on merge-candidate scores.", "On each level k of the bottom-up derivation, the merge-candidate score of the pair (i, i+1) is computed as follow: s k (i) = q, Tree-LSTM(r k i , r k i+1 ) , where q is a trainable query 
vector and r k i is the constituent representation at position i after k mergings.", "We merge a pair (i * , i * + 1) sampled from the Categorical distribution built on the merge-candidate scores.", "The representations of the constituents are then updated as follow: r k+1 i =      r k i , i < i * , Tree-LSTM(r k i , r k i+1 ) i = i * , r k i+1 i > i * .", "This procedure is repeated until one constituent remains.", "Its hidden state is the input sentence representation.", "This procedure is non-differentiable.", "Choi et al.", "(2018) use an approximation based on the Gumbel-Softmax distribution (Maddison et al., 2016; Jang et al., 2016) and the reparametrization trick (Kingma and Welling, 2013) .", "This relaxation makes the problem differentiable at the cost of a bias in the gradient estimates (Jang et al., 2016) .", "This difference between the real objective function and their approximation could explain why their method cannot recover simple context-free grammars (Nangia and Bowman, 2018) .", "We investigate this question by proposing an alternative optimisation scheme that directly aims for the correct objective function.", "Our model We consider the problem defined in Eq.", "(3) to jointly learn a composition function and an internal parser.", "Our model is composed of the parser of Choi et al.", "(2018) and the Tree-LSTM for the composition function.", "As suggested in past work Schulman et al., 2017) , we added an entropy H over the tree distribution to the objective function: min θ, φL (θ, φ) − λ x H(t | x), (4) where λ > 0.", "This regulariser improves exploration by preventing early convergence to a suboptimal deterministic parsing strategy.", "The new objective function is differentiable with respect to θ, but not φ, the parameters of the parser.", "Learning θ follows the same procedure with BPTS as if the tree would be externally given.", "In the rest of this section, we discuss the optimization of the parser and a cooperative training strategy to reduce the coadaptation issue.", "Unbiased gradient estimation We cast the training of the parser as a reinforcement learning problem.", "The parser is an agent whose reward function is the negative of the loss function defined in Eq.", "(3).", "Its action space is the space of binary trees.", "The agent's policy is a probability distribution over binary trees that decomposes as a sequence of K merging actions: p φ (t|x) = K k=0 π φ (a i k |r k ), (5) where r k = (r k 0 , .", ".", ".", ", r k K−k ).", "The loss function is optimised with respect to φ with REIN-FORCE (Williams, 1992) .", "REINFORCE requires a considerable number of random samples to obtain a gradient estimate with a reasonable level of variance.", "This number is positively correlated with the size of the search space, which is exponentially large in the case of binary trees.", "We consider several extensions of REINFORCE to circumvent this problem.", "Variance reduction.", "An alternative solution to increasing the number of samples is the control variates method (Ross, 1997) .", "It takes advantage of random variables with known expected values and positive correlation with the quantity whose expectation is tried to be estimated.", "Given an input-output pair (x, y) and tree t sampled from p φ (t|x) , let's define the random variable G as: G(t) = (f θ (x, t), y) ∂log p φ (t|x) ∂φ .", "(6) According to REINFORCE, calculating the gradient with respect to φ for the pair (x, y) is then equivalent to determining the unknown mean of the random variable G(t) 2 .", 
"Let's assume there is a control variate, i.e., a random variable b(t) that positively correlates with G and has known expected value with respect to p φ (.|x).", "Given N samples of the G(t) and the control variate b(t), the new gradient estimator is: G CV = E p φ (t|x) [b(t)] + 1 N N i=1 (G(t i ) − b(t i )) .", "A popular control variate, or baseline, used in REINFORCE is the moving average of recent rewards multiplied by the score function (Ross, 1997) : b(t) = c∇ φ log p φ (t|x).", "It has a zero mean under the p φ (.|x) distribution and it positively correlates with G(t).", "2 Note that while we are computing the gradients using , we could also directly optimise the parser with respect to downstream accuracy.", "Surrogate loss.", "REINFORCE often is implemented via a surrogate loss defined as follow: E t [r φ (t) (f θ (x, t), y)] , (7) whereÊ t is the empirical average over a finite batch of samples and r φ (t) = p φ (t|x) p φ old (t|x) is the probability ratio with φ old standing for the parameters before the update.", "Input-dependent baseline.", "The moving average baseline cannot detect changes in rewards caused by structural differences in the inputs.", "In our case, a long arithmetic expression is much harder to parse than a short one, systematically leading to their lower rewards.", "This structural differences in the rewards aggravate the credit assignment problem by encouraging REINFORCE to discard actions sampled for longer sequences even though there might be some subsequences of actions that produce correct parsing subtrees.", "A solution is to make the baseline inputdependent.", "In particular, we use the self-critical training (SCT) baseline of Rennie et al.", "(2017) , defined as: b(t, x) = c θ,φ (x)∇ φ log p φ (t | x), where c θ,φ is the reward obtained with the policy used at test time, i.e.,t = arg max p φ (t|x).", "This control variate has a zero mean under the p φ (t|x) distribution and correlates positively with the gradients.", "Computing the arg max of a policy among all possible binary trees has exponential complexity.", "We replace it with a simpler greedy decoding, i.e, a tree t is selected by following a sequence of greedy actionsâ k : a k = arg max π φ (a k |r k ).", "This approximation is very efficient and computing the baseline requires only one additional forward pass.", "Gradient normalization.", "We empirically observe significant fluctuations in the gradient norms.", "This creates instability that can not be reduced by additive terms, such as the inputdependent baselines.", "A solution is to divide the gradients by a coarse approximation of their norm, e.g., a running estimate of the reward standard deviation (Mnih and Gregor, 2014) .", "This trick ensures that the rewards remain approximately in the unit ball, making the learning process less sensitive to steep changes in the loss.", "Synchronizing syntax and semantics learning with PPO The gradients of the loss function from the Eq.", "(4) are calculated using two different schemes, BPST for the composition function parameters θ and RE-INFORCE for the parser parameters φ.", "Then, both are updated with SGD.", "The estimate of the gradient with respect to φ has higher variance compared to the estimate with respect to θ.", "Hence, using the same learning rate schedule does not necessarily correspond to the same real pace of learning.", "It is φ parameters that are harder to optimise, so to improve training stability and convergence it is reasonable to aim for such updates that does not change the 
policy too much or too little.", "A simple yet effective solution is the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017) .", "It considers the next surrogate loss: E t max r φ (t) (f θ (x, t), y) , r c φ (t) (f θ (x, t), y) , Where r c φ (t) = clip (r φ (t), 1 − , 1 + ) and is a real number in (0; 0.5].", "The first argument of the max is the surrogate loss for REINFORCE.", "The clipped ratio in the second argument disincentivises the optimiser from performing updates resulting in large tree probability changes.", "With this, the policy parameters can be optimised with repeated K steps of SGD to ensure a similar \"pace\" of learning between the parser and the compositional function.", "Related work Besides the works mentioned in Sec.", "2 and Sec.", "3, there is a vast literature on learning latent parsers.", "Early connectionist work in inferring context-free grammars proposed stack-augmented models and relied on explicit supervision on the strings that belonged to the target language and those that did not (Giles et al., 1989; Sun, 1990; Mozer and Das, 1992) .", "More recently, new stackaugmented models were shown to learn latent grammars from positive evidence alone (Joulin and Mikolov, 2015) .", "In parallel to these, other statistical approaches were proposed to automatically induce grammars from unparsed text (Sampson, 1986; Magerman and Marcus, 1990; Carroll and Charniak, 1992; Brill, 1993; Klein and Manning, 2002) .", "Our work departs from these approaches in that we aim at learning a latent grammar in the context of performing some given task.", "Socher et al.", "(2011) uses a surrogate autoencoder objective to search for a constituency structure, merging nodes greedily based on the reconstruction loss.", "Maillard et al.", "(2017) defines a relaxation of a CYK-like chart parser that is trained for a particular task.", "A similar idea is introduced in Le and Zuidema (2015) where an automatic parser prunes the chart to reduce the overall complexity of the algorithm.", "Another strategy, similar in nature, has been recently proposed by Corro and Titov (2018) , where Gumbel noise is used with differentiable dynamic programming to generate dependency trees.", "In contrast, Yogatama et al.", "(2016) learns a Shift-Reduce parser using reinforcement learning.", "Maillard and Clark (2018) further proposes a beam search strategy to overcome learning trivial trees.", "On a different vein, Vlad Niculae (2018) proposes a quadratic penalty term over the posterior distribution of nonprojective dependency trees to enforce sparsity of the relaxation.", "Finally, there is a large body of work in Reinforcement Learning that aims at discovering how to combine elementary modules to solve complex tasks (Singh, 1992; Chang et al., 2018; Sahni et al., 2017) .", "Due to the limited space, we will not discuss them in further details.", "Experiments We conducted experiments on three different tasks: evaluating mathematical expressions on the ListOps dataset (Nangia and Bowman, 2018) , sentiment analysis on the SST dataset (Socher et al., 2013) and natural language inference task on the SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Technical details.", "For ListOps, we follow the experimental protocol of Nangia and Bowman (2018) , i.e., a 128 dimensional model and a tenway softmax classifier.", "However, we replace their multi-layer perceptron (MLP) by a linear classifier.", "The validation set is composed of 1k examples randomly selected from the training 
set.", "For SST and NLI, we follow the setup of Choi et al.", "(2018) : we initialise the word vectors with GloVe300D (Pennington et al., 2014) and train an MLP classifier on the sentence representations.", "The hyperparameters are selected on the validation set using 5 random seeds for each configuration.", "Our hyperparameters are the learning rate, weight decay, the regularisation parameter λ, the leaf transformations, variance reduction hyperpa- rameters and the number of updates K in PPO.", "We use an adadelta optimizer (Zeiler, 2012).", "ListOps The ListOps dataset probes the syntax learning ability of latent tree models (Nangia and Bowman, 2018) .", "It is designed to have a single correct parsing strategy that a model must learn in order to succeed.", "It is composed of prefix arithmetic expressions and the goal is to predict the numerical output associated with the evaluation of the expression.", "The sequences are made of integers in [0, 9] Table 2 , the current leading latent tree models are unable to learn the correct parsing strategy on ListOps (Nangia and Bowman, 2018) .", "They even achieve performance worse than purely sequential recurrent networks.", "On the other hand, our model achieves near perfect accuracy on this task, suggesting that our model is able to discover the correct parsing strategy.", "Our model differs in several ways from the Gumbel Tree-LSTM of Choi et al.", "(2018) that could explain this gap in performance.", "In the rest of this section, we perform an ablation study on our model to understand the importance of each of these differences.", "Impact of the baseline and PPO.", "We report the impact of our design choices on the performance in Table 1 .", "Our model without baseline nor PPO is vanilla REINFORCE.", "The baselines only improve performance when PPO is used.", "Furthermore, these ablated models without PPO perform on-par with the RL-SPINN model (see Table 2 ).", "This confirms our expectations for models that fail to synchronise syntax and semantics learning.", "Interestingly, using PPO has a positive impact on both baselines, but accuracy remains low with the moving average baseline.", "The reduction of variance induced by the SCT baseline leads to a near-perfect recovery of the good parsing strategy in all five experiments.", "This shows the importance of this baseline for the stability of our approach.", "Sensitivity to hyperparameters.", "Our model is relatively robust to hyperparameters changes when we use the SCT baseline and PPO.", "For example, changing the leaf transformation or dimensionality of the model has a minor impact on performance.", "However, we have observed that the choice of the optimiser has a significant impact.", "For example, the average performance drops to 73.0% if we replace Adadelta by Adam (Kingma and Ba, 2014 ).", "Yet, the maximum value out of 5 runs remains relatively high, 99.0%.", "Untied parameters.", "As opposed to previous work, the parameters of the parser and the composition function are not tied in our model.", "Without this separation between syntax and semantics, it would be impossible to update one module with- out changing the other.", "The gradient direction is then dominated by the low variance signal from the semantic component, making it hard to learn the parser.", "We confirmed experimentally that our model with tied parameters fails to find the correct parser and its accuracy drops to 64.7%.", "Extrapolation and Grammaticality.", "Recursive models have the potential to generalise to any 
sequence length.", "Our model was trained with sequences of length up to 130 tokens.", "We test the ability of the model to generalise to longer sequences by generating additional expressions of lengths 200 to 1000.", "As shown in Fig.1 , our model has a little loss in accuracy as the length increases to ten times the maximum length seen during training.", "On the other hand, we notice that final representations produced by the parser are very similar to each other.", "Indeed, the cosine similarity between these vectors for the test set has a mean value of 0.998 with a standard deviation of 0.002.", "There are two possible explanations for this observation: either our model assigns similar representations to valid expressions, or it produces a trivial uninformative representation regardless of the expression.", "To verify which explanation is correct, we generate ungrammatical expressions by removing either one operation token or one closing bracket symbol for each sequence in the test set.", "As shown in Figure 2 , in contrast to grammatical expressions, ungrammatical ones tend to be very different from each other: \"Happy families are all alike; every unhappy family is unhappy in its own way.\"", "The only exception, marked by a mode near 1, come from ungrammatical expressions that represent incomplete expressions because of missing a closing bracket at the end.", "This kind of sequences were seen by the parser during training and they indeed have to be represented by the same vector.", "These observations show that our model does not produce a trivial representation, but identifies the rules and constraints of the grammar.", "Moreover, vectors for grammatical sequences are so different from vectors for ungrammatical ones that you can tell them apart with 99.99% accuracy by simply measuring their cosine similarity to a randomly chosen grammatical vector from the training set.", "Interestingly, we have not observed a similar signal from the vectors generated by the composition function.", "Even learning a naive classifier between grammatical and ungrammatical expressions on top of these representations achieves an accuracy of only 75%.", "This suggests that most of the syntactic information is captured by the parser, not the composition function.", "Natural Language Inference We next evaluate our model on natural language inference using the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Natural language inference consists in predicting the relationship between two sentences which can be either entailment, contradiction, or neutral.", "The task can be formulated as a three-way classification problem.", "The results are shown in Tables 3 and 4 .", "When training the model on MultiNLI dataset we augment the training data with the SNLI data and use matched versions of the de- velopment and test sets.", "Surprisingly, two out of four models for MultiNLI task collapsed to leftbranching parsing strategies.", "This collapse can be explained by the absence of the entropy regularisation and the small number of PPO updates K = 1, which were determined to be optimal via hyperparameter optimisation.", "As with ListOps, using an Adadelta optimizer significantly improves the training of the model.", "Sentiment Analysis We evaluate our model on a sentiment classification task using the Stanford Sentiment Treebank (SST) of Socher et al.", "(2013) .", "All sentences in SST are represented as binary parse trees, and each subtree of a 
parse tree is annotated with the corresponding sentiment score.", "There are two versions of the dataset, with either binary labels, \"negative\" or \"positive\", (SST-2) or five labels, representing fine-grained sentiments (SST-5).", "As shown in Ta- ble 5, our results are in line with previous work, confirming the benefits of using latent syntactic parse trees instead of the predefined syntax.", "We noticed that all models trained on NLI or sentiment analysis tasks have parsing policies with relatively high entropy.", "This indicates that the algorithm does not prefer any specific grammar.", "Indeed, generated trees are very similar to balanced ones.", "This result is in line with Shi et al.", "(2018) where they observe that binary balanced tree encoder gets the best results on most classification tasks.", "We also compare with state-of-the-art sequence-based models.", "For the most part, these models are pre-trained on larger datasets and fine-tuned on these tasks.", "Nonetheless, they outperform recursive models by a significant margin.", "Performance on these datasets is more impacted by pre-training than by learning the syntax.", "It would be interesting to see if a similar pre-training would also improve the performance of recursive models with latent tree learning.", "Conclusion In this paper, we have introduced a novel model for learning latent tree parsers.", "Our approach relies on a separation between syntax and semantics.", "This allows dedicated optimisation schemes for each module.", "In particular, we found that it is important to have an unbiased estimator of the parser gradients and to allow multiple gradient steps with PPO.", "When tested on a CFG, our learned parser generalises to sequences of any length and distinguishes grammatical from ungrammatical expressions by forming meaningful representations for well-formed expressions.", "For natural language tasks, instead, the model prefers to fall back to trivial strategies, in line with what was previously observed by Shi et al.", "(2018) .", "Additionally, our approach performs competitively on several real natural language tasks.", "In the future, we would like to explore further relaxation-based techniques for learning the parser, such as REBAR (Tucker et al., 2017) or ReLAX (Grathwohl et al., 2017) .", "Finally, we plan to look into applying recursive approaches to language modelling as a pre-training step and measure if it has the same impact on downstream tasks as sequential models." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.3.1", "3", "3.1", "3.2", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "Recursive Neural Networks", "Learning with RvNNs", "Latent tree models", "Gumbel Tree-LSTM", "Our model", "Unbiased gradient estimation", "Synchronizing syntax and semantics learning with PPO", "Related work", "Experiments", "ListOps", "Natural Language Inference", "Sentiment Analysis", "Conclusion" ] }
GEM-SciDuet-train-117#paper-1314#slide-6
Optimization challenges
The size of the search space is the Catalan number of the sequence length: for a sentence with 20 words, there are roughly 1.8 billion possible binary trees. Syntax and semantics have to be learnt simultaneously: the model has to infer from examples that [MIN 0 1] = 0, in a nonstationary environment (i.e., the same sequence of actions can receive different rewards). Typically, the compositional function is learned faster than the parser. This fast coadaptation limits the exploration of the search space to parsing strategies similar to those found at the beginning of the training. High variance in the estimate of a parser's gradient has to be addressed. The learning paces of the parser and the compositional function are levelled off by controlling the parser's updates using Proximal Policy Optimization (PPO) (Schulman et al., 2017). The high variance in the estimate of the parser's gradient is addressed by using the self-critical training (SCT) baseline of Rennie et al. (2017).
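The search-space figure on this slide can be checked with a short computation: the number of distinct binary tree shapes over n leaves is the Catalan number C(n−1). A sketch (the function name is ours):

from math import comb

def num_binary_trees(n_leaves):
    """Catalan number C(m) = comb(2m, m) / (m + 1), with m = n_leaves - 1."""
    m = n_leaves - 1
    return comb(2 * m, m) // (m + 1)

print(num_binary_trees(20))  # 1767263190, i.e. about 1.8 billion trees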
Size of the search space: the number of binary trees over n words grows as the Catalan number C(n-1); for a sentence with 20 words, there are 1,767,263,190 possible trees. Syntax and semantics have to be learnt simultaneously: the model has to infer from examples that [MIN 0 1] = 0. The environment is non-stationary (i.e., the same sequence of actions can receive different rewards). Typically, the compositional function is learned faster than the parser. This fast coadaptation limits the exploration of the search space to parsing strategies similar to those found at the beginning of the training. High variance in the estimate of a parser's gradient has to be addressed. The learning paces of the parser and the compositional function are levelled off by controlling the parser's updates using Proximal Policy Optimization (PPO). High variance in the estimate of a parser's gradient is addressed by using the self-critical training (SCT) baseline of Rennie et al. (2017).
[]
GEM-SciDuet-train-117#paper-1314#slide-7
1314
Cooperative Learning of Disjoint Syntax and Semantics
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) have recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near-perfect accuracy on this task. Our model is composed of two separate modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis. * Work done while the author was an intern at Facebook AI Research.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250 ], "paper_content_text": [ "Introduction Standard linguistic theories propose that natural language is structured as nested constituents organised in the form of a tree (Partee et al., 1990) .", "However, most popular models, such as the Long Sort-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) , process text without imposing a grammatical structure.", "To bridge this gap between theory and practice models that process linguistic expressions in a tree-structured manner have been considered in recent work (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015; Bowman et al., 2016) .", "These tree-based models explicitly require access to the syntactic structure for the text, which is not entirely satisfactory.", "Indeed, parse tree level supervision requires a significant amount of annotations from expert lin-guists.", "These trees have been annotated with different goals in mind than the tasks we are using them for.", "Such discrepancy may result in a deterioration of the performance of models relying on them.", "Recently, several attempts were made to learn these models without explicit supervision for the parser (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "However, Williams et al.", "(2018a) has recently shown that the structures learned by these models cannot be ascribed to discovering meaningful syntactic structure.", "These models even fail to learn the simple context-free grammar of nested mathematical operations (Nangia and Bowman, 2018) .", "In this work, we present an extension of Choi et al.", "(2018) , that successfully learns these simple grammars while preserving competitive performance on several standard linguistic tasks.", "Contrary to previous work, our model makes a clear distinction between the parser and the compositional function.", "These two modules are trained with different algorithms, cooperating to build a semantic representation that optimises the objective function.", "The parser's goal is to generate a tree structure for the sentence.", "The compositional function follows this structure to produce the sentence representation.", "Our model contains a continuous component, the compositional function, and a discrete one, the parser.", "The whole system is trained end-to-end with a mix of reinforcement learning and gradient descent.", "Drozdov and Bowman (2017) has noticed the 
difficulty of mixing these two optimisation schemes without one dominating the other.", "This typically leads to the \"coadaptation problem\" where the parser simply follows the compositional function and fails to produce meaningful syntactic structures.", "In this work, we show that this pitfall can be avoided by synchronising the learning paces of the two optimisation schemes.", "This is achieved by combining several recent advances in reinforcement learning.", "First, we use input-dependent control variates to reduce the variance of our gradient estimates (Ross, 1997) .", "Then, we apply multiple gradient steps to the parser's policy while controlling for its learning pace using the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017) .", "The code for our model is publicly available 1 .", "Preliminaries In this section, we present existing works on Recursive Neural Networks and their training in the absence of supervision on the syntactic structures.", "Recursive Neural Networks A Recursive Neural Network (RvNN) has its architecture defined by a directed acyclic graph (DAG) given alongside an input sequence (Goller and Kuchler, 1996) .", "RvNNs are commonly used in NLP to generate sentence representations that leverage available syntactic information, such as constituency or dependency parse trees (Socher et al., 2011) .", "Given an input sequence and its associated DAG, an RvNN processes the sequence by applying a transformation to the representations of the tokens lying on the lowest levels of the DAG.", "This transformation, or compositional function, merges these representations into representations for the nodes on the next level of the DAG.", "This process is repeated recursively along the graph structure until the top-level nodes are reached.", "In this work, we assume that the compositional function is the same for every node in the graph.", "Tree-LSTM.", "We focus on a specific type of RvNNs, the tree-based long short-term memory network (Tree-LSTM) of Tai et al.", "(2015) and Zhu et al.", "(2015) .", "Its compositional function generalizes the LSTM cell of Hochreiter and Schmidhuber (1997) to tree-structured topologies, i.e., $\begin{pmatrix} z \\ i \\ f_l \\ f_r \\ o \end{pmatrix} = \begin{pmatrix} \tanh \\ \sigma \\ \sigma \\ \sigma \\ \sigma \end{pmatrix} \left( R \begin{pmatrix} h_l \\ h_r \end{pmatrix} + b \right)$, $c_p = z \odot i + c_l \odot f_l + c_r \odot f_r$, $h_p = \tanh(c_p) \odot o$, where $\sigma$ and $\tanh$ are the sigmoid and hyperbolic tangent functions, applied elementwise, and $\odot$ denotes the elementwise product.", "The Tree-LSTM cell is differentiable with respect to its recursion matrix R, bias b and its input.", "The gradients of a Tree-LSTM can thus be computed with backpropagation through structure (BPTS) (Goller and Kuchler, 1996) .", "Learning with RvNNs A tree-based RvNN is a function $f_\theta$ parameterized by a $d$-dimensional vector $\theta$ that predicts an output y given an input x and a tree t.
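To make the Tree-LSTM composition above concrete, here is a minimal sketch of the binary cell in PyTorch. It is an illustration of the equations only, for a single (unbatched) pair of children; the class and variable names are ours, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BinaryTreeLSTMCell(nn.Module):
    """Minimal binary Tree-LSTM cell: one affine map R[h_l; h_r] + b
    producing the five gate pre-activations (z, i, f_l, f_r, o)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, 5 * dim)  # plays the role of R and b

    def forward(self, left, right):
        (h_l, c_l), (h_r, c_r) = left, right
        z, i, f_l, f_r, o = self.proj(torch.cat([h_l, h_r], dim=-1)).chunk(5, dim=-1)
        z = torch.tanh(z)
        i, f_l, f_r, o = [torch.sigmoid(g) for g in (i, f_l, f_r, o)]
        c_p = z * i + c_l * f_l + c_r * f_r  # parent cell state
        h_p = torch.tanh(c_p) * o            # parent hidden state
        return h_p, c_p
```

Gradients flow through this cell with ordinary backpropagation through structure (BPTS).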
Given a dataset D of N triplets (x, t, y), the parameters of the RvNN are learned with the following minimisation problem: $\min_{\theta \in \mathbb{R}^d} \frac{1}{N} \sum_{(x,t,y) \in D} \ell(f_\theta(x, t), y)$, (1) where $\ell$ is a logistic regression function.", "These models need an externally provided parsing tree for each input sentence during both training and evaluation.", "Alternatives, such as the shift-reduce-based SPINN model of Bowman et al.", "(2016) , learn an internal parser from the given trees.", "While these solutions do not need external trees during evaluation, they still require tree-level annotations for training.", "More recent work has focused on learning a latent parser with no direct supervision.", "Latent tree models Latent tree models aim at jointly learning the compositional function $f_\theta$ and a parser without supervision on the syntactic structures (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "The latent parser is defined as a parametric probability distribution over trees conditioned on the input sequence.", "The parameters of this tree distribution $p_\phi(\cdot|x)$ are represented by a vector $\phi$.", "Given a dataset D of pairs of input sequences x and outputs y, the parameters $\theta$ and $\phi$ are jointly learned by minimising the following objective function: $\min_{\theta,\phi} L(\theta, \phi) = \frac{1}{N} \sum_{(x,y)} \ell(\mathbb{E}_\phi[f_\theta(x, t)], y)$, (2) where $\mathbb{E}_\phi$ is the expectation with respect to the $p_\phi(\cdot|x)$ distribution.", "Directly minimising this objective function is often difficult due to expensive marginalisation of the unobserved trees.", "Hence, when $\ell$ is a convex function (e.g.", "cross entropy of an exponential family) usually an upper bound of Eq.", "(2) can be derived by applying Jensen's inequality: $\bar{L}(\theta, \phi) = \frac{1}{N} \sum_{(x,y)} \mathbb{E}_\phi[\ell(f_\theta(x, t), y)]$. (3)", "Learning a distribution over a set of discrete items involves a discrete optimisation scheme.", "For example, the RL-SPINN model of Yogatama et al.", "(2016) uses a mix of gradient descent for $\theta$ and REINFORCE for $\phi$ (Williams et al., 2018a) .", "Drozdov and Bowman (2017) have recently observed that this optimisation strategy tends to produce poor parsers, e.g., parsers that only generate left-branching trees.", "The effect, called the coadaptation issue, is caused by both bias in the parsing strategy and a difference in convergence paces of continuous and discrete optimisers.", "Typically, the parameters $\theta$ are learned more rapidly than $\phi$.", "This limits the exploration of the search space to parsing strategies similar to those found at the beginning of the training.", "Gumbel Tree-LSTM In their Gumbel Tree-LSTM model, Choi et al.", "(2018) propose an alternative parsing strategy to avoid the coadaptation issue.", "Their parser incrementally merges a pair of consecutive constituents until a single one remains.", "This strategy reduces the bias towards certain tree configurations observed with RL-SPINN.", "Each word i of the input sequence is represented by an embedding vector.", "A leaf transformation maps this vector to a pair of vectors $r_i^0 = (h_i^0, c_i^0)$.", "We considered three types of leaf transformations: affine transformation, LSTM and bidirectional LSTM.", "The resulting representations form the initial states of the Tree-LSTM.", "In the absence of supervision, the tree is built in a bottom-up fashion by recursively merging consecutive constituents (i, i + 1) based on merge-candidate scores.", "On each level k of the bottom-up derivation, the merge-candidate score of the pair (i, i+1) is computed as follows: $s_k(i) = \langle q, \mathrm{Tree\text{-}LSTM}(r_i^k, r_{i+1}^k) \rangle$, where $q$ is a trainable query vector and $r_i^k$ is the constituent representation at position i after k mergings.", "We merge a pair $(i^*, i^*+1)$ sampled from the Categorical distribution built on the merge-candidate scores.", "The representations of the constituents are then updated as follows: $r_i^{k+1} = \begin{cases} r_i^k & i < i^* \\ \mathrm{Tree\text{-}LSTM}(r_i^k, r_{i+1}^k) & i = i^* \\ r_{i+1}^k & i > i^* \end{cases}$.", "This procedure is repeated until one constituent remains.", "Its hidden state is the input sentence representation.", "This procedure is non-differentiable.",
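The bottom-up merge procedure just described can be sketched as follows. This is a simplified, unbatched illustration under our own naming (`BinaryTreeLSTMCell` is the cell sketched earlier, `q` the trainable query vector); the actual implementation may differ.

```python
import torch
from torch.distributions import Categorical

def parse_sentence(states, cell, q):
    """states: list of (h, c) leaf pairs; q: trainable query vector.
    Returns the final (h, c) constituent and log p_phi(t|x)."""
    log_prob = torch.tensor(0.0)
    while len(states) > 1:
        # Score every adjacent pair: s_k(i) = <q, Tree-LSTM(r_i, r_{i+1})>.
        merged = [cell(states[i], states[i + 1]) for i in range(len(states) - 1)]
        scores = torch.stack([torch.dot(h, q) for h, _ in merged])
        dist = Categorical(logits=scores)
        idx = dist.sample()                       # sample the merge position i*
        log_prob = log_prob + dist.log_prob(idx)  # accumulate one merge factor
        i = int(idx)
        states = states[:i] + [merged[i]] + states[i + 2:]
    return states[0], log_prob
```

Summing the per-merge log-probabilities in `log_prob` yields log p_phi(t|x), the quantity REINFORCE differentiates below.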
"Choi et al.", "(2018) use an approximation based on the Gumbel-Softmax distribution (Maddison et al., 2016; Jang et al., 2016) and the reparametrization trick (Kingma and Welling, 2013) .", "This relaxation makes the problem differentiable at the cost of a bias in the gradient estimates (Jang et al., 2016) .", "This difference between the real objective function and their approximation could explain why their method cannot recover simple context-free grammars (Nangia and Bowman, 2018) .", "We investigate this question by proposing an alternative optimisation scheme that directly aims for the correct objective function.", "Our model We consider the problem defined in Eq.", "(3) to jointly learn a composition function and an internal parser.", "Our model is composed of the parser of Choi et al.", "(2018) and the Tree-LSTM for the composition function.", "As suggested in past work (Schulman et al., 2017) , we added an entropy term H over the tree distribution to the objective function: $\min_{\theta,\phi} \bar{L}(\theta, \phi) - \lambda \sum_x H(t \mid x)$, (4) where $\lambda > 0$.", "This regulariser improves exploration by preventing early convergence to a suboptimal deterministic parsing strategy.", "The new objective function is differentiable with respect to $\theta$, but not $\phi$, the parameters of the parser.", "Learning $\theta$ follows the same procedure with BPTS as if the tree were externally given.", "In the rest of this section, we discuss the optimization of the parser and a cooperative training strategy to reduce the coadaptation issue.", "Unbiased gradient estimation We cast the training of the parser as a reinforcement learning problem.", "The parser is an agent whose reward function is the negative of the loss function defined in Eq.", "(3).", "Its action space is the space of binary trees.", "The agent's policy is a probability distribution over binary trees that decomposes as a sequence of K merging actions: $p_\phi(t|x) = \prod_{k=0}^{K} \pi_\phi(a_{i_k} \mid r^k)$, (5) where $r^k = (r^k_0, \ldots, r^k_{K-k})$.", "The loss function is optimised with respect to $\phi$ with REINFORCE (Williams, 1992) .", "REINFORCE requires a considerable number of random samples to obtain a gradient estimate with a reasonable level of variance.", "This number is positively correlated with the size of the search space, which is exponentially large in the case of binary trees.", "We consider several extensions of REINFORCE to circumvent this problem.", "Variance reduction.", "An alternative solution to increasing the number of samples is the control variates method (Ross, 1997) .", "It takes advantage of random variables with known expected values and positive correlation with the quantity whose expectation we are trying to estimate.", "Given an input-output pair (x, y) and a tree t sampled from $p_\phi(t|x)$, let's define the random variable G as: $G(t) = \ell(f_\theta(x, t), y) \, \frac{\partial \log p_\phi(t|x)}{\partial \phi}$. (6)", "According to REINFORCE, calculating the gradient with respect to $\phi$ for the pair (x, y) is then equivalent to determining the unknown mean of the random variable G(t) 2 .",
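In an autograd framework, the Eq. (6) estimator with a control variate is usually realised through a surrogate scalar whose gradient equals the desired estimate. A minimal sketch, assuming `baseline` stands for any b(t)-style quantity with known expectation (e.g. a moving average of recent losses):

```python
def reinforce_surrogate(task_loss, tree_log_prob, baseline):
    """task_loss: l(f_theta(x, t), y) for the sampled tree (scalar tensor).
    tree_log_prob: log p_phi(t|x), the sum of per-merge log-probabilities.
    The reward term is detached so that differentiating the returned scalar
    w.r.t. phi gives a sample of G(t) with the baseline subtracted."""
    return (task_loss - baseline).detach() * tree_log_prob
```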
"Let's assume there is a control variate, i.e., a random variable b(t) that positively correlates with G and has known expected value with respect to p φ (.|x).", "Given N samples of the G(t) and the control variate b(t), the new gradient estimator is: G CV = E p φ (t|x) [b(t)] + 1 N N i=1 (G(t i ) − b(t i )) .", "A popular control variate, or baseline, used in REINFORCE is the moving average of recent rewards multiplied by the score function (Ross, 1997) : b(t) = c∇ φ log p φ (t|x).", "It has a zero mean under the p φ (.|x) distribution and it positively correlates with G(t).", "2 Note that while we are computing the gradients using , we could also directly optimise the parser with respect to downstream accuracy.", "Surrogate loss.", "REINFORCE often is implemented via a surrogate loss defined as follow: E t [r φ (t) (f θ (x, t), y)] , (7) whereÊ t is the empirical average over a finite batch of samples and r φ (t) = p φ (t|x) p φ old (t|x) is the probability ratio with φ old standing for the parameters before the update.", "Input-dependent baseline.", "The moving average baseline cannot detect changes in rewards caused by structural differences in the inputs.", "In our case, a long arithmetic expression is much harder to parse than a short one, systematically leading to their lower rewards.", "This structural differences in the rewards aggravate the credit assignment problem by encouraging REINFORCE to discard actions sampled for longer sequences even though there might be some subsequences of actions that produce correct parsing subtrees.", "A solution is to make the baseline inputdependent.", "In particular, we use the self-critical training (SCT) baseline of Rennie et al.", "(2017) , defined as: b(t, x) = c θ,φ (x)∇ φ log p φ (t | x), where c θ,φ is the reward obtained with the policy used at test time, i.e.,t = arg max p φ (t|x).", "This control variate has a zero mean under the p φ (t|x) distribution and correlates positively with the gradients.", "Computing the arg max of a policy among all possible binary trees has exponential complexity.", "We replace it with a simpler greedy decoding, i.e, a tree t is selected by following a sequence of greedy actionsâ k : a k = arg max π φ (a k |r k ).", "This approximation is very efficient and computing the baseline requires only one additional forward pass.", "Gradient normalization.", "We empirically observe significant fluctuations in the gradient norms.", "This creates instability that can not be reduced by additive terms, such as the inputdependent baselines.", "A solution is to divide the gradients by a coarse approximation of their norm, e.g., a running estimate of the reward standard deviation (Mnih and Gregor, 2014) .", "This trick ensures that the rewards remain approximately in the unit ball, making the learning process less sensitive to steep changes in the loss.", "Synchronizing syntax and semantics learning with PPO The gradients of the loss function from the Eq.", "(4) are calculated using two different schemes, BPST for the composition function parameters θ and RE-INFORCE for the parser parameters φ.", "Then, both are updated with SGD.", "The estimate of the gradient with respect to φ has higher variance compared to the estimate with respect to θ.", "Hence, using the same learning rate schedule does not necessarily correspond to the same real pace of learning.", "It is φ parameters that are harder to optimise, so to improve training stability and convergence it is reasonable to aim for such updates that does not change the 
"A simple yet effective solution is the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017) .", "It considers the following surrogate loss: $\hat{\mathbb{E}}_t \left[ \max\left( r_\phi(t) \, \ell(f_\theta(x, t), y), \; r^c_\phi(t) \, \ell(f_\theta(x, t), y) \right) \right]$, where $r^c_\phi(t) = \mathrm{clip}(r_\phi(t), 1 - \epsilon, 1 + \epsilon)$ and $\epsilon$ is a real number in (0; 0.5].", "The first argument of the max is the surrogate loss for REINFORCE.", "The clipped ratio in the second argument disincentivises the optimiser from performing updates resulting in large tree probability changes.", "With this, the policy parameters can be optimised with K repeated steps of SGD to ensure a similar \"pace\" of learning between the parser and the compositional function.",
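The clipped PPO surrogate can be sketched as follows; because a loss is being minimised rather than a reward maximised, the pessimistic bound is a max rather than PPO's usual min. The default `eps=0.2` is an assumption for illustration, within the stated (0; 0.5] range.

```python
import torch

def ppo_parser_loss(new_log_prob, old_log_prob, task_loss, eps=0.2):
    """One of the K PPO steps on the parser; old_log_prob is
    log p_{phi_old}(t|x), recorded when the tree was sampled."""
    ratio = torch.exp(new_log_prob - old_log_prob)      # r_phi(t)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)  # r^c_phi(t)
    loss = task_loss.detach()                           # reward is held fixed
    return torch.max(ratio * loss, clipped * loss)
```

In training, this loss would be minimised for K successive SGD steps on the parser parameters per update of the composition function.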
set.", "For SST and NLI, we follow the setup of Choi et al.", "(2018) : we initialise the word vectors with GloVe300D (Pennington et al., 2014) and train an MLP classifier on the sentence representations.", "The hyperparameters are selected on the validation set using 5 random seeds for each configuration.", "Our hyperparameters are the learning rate, weight decay, the regularisation parameter λ, the leaf transformations, variance reduction hyperpa- rameters and the number of updates K in PPO.", "We use an adadelta optimizer (Zeiler, 2012).", "ListOps The ListOps dataset probes the syntax learning ability of latent tree models (Nangia and Bowman, 2018) .", "It is designed to have a single correct parsing strategy that a model must learn in order to succeed.", "It is composed of prefix arithmetic expressions and the goal is to predict the numerical output associated with the evaluation of the expression.", "The sequences are made of integers in [0, 9] Table 2 , the current leading latent tree models are unable to learn the correct parsing strategy on ListOps (Nangia and Bowman, 2018) .", "They even achieve performance worse than purely sequential recurrent networks.", "On the other hand, our model achieves near perfect accuracy on this task, suggesting that our model is able to discover the correct parsing strategy.", "Our model differs in several ways from the Gumbel Tree-LSTM of Choi et al.", "(2018) that could explain this gap in performance.", "In the rest of this section, we perform an ablation study on our model to understand the importance of each of these differences.", "Impact of the baseline and PPO.", "We report the impact of our design choices on the performance in Table 1 .", "Our model without baseline nor PPO is vanilla REINFORCE.", "The baselines only improve performance when PPO is used.", "Furthermore, these ablated models without PPO perform on-par with the RL-SPINN model (see Table 2 ).", "This confirms our expectations for models that fail to synchronise syntax and semantics learning.", "Interestingly, using PPO has a positive impact on both baselines, but accuracy remains low with the moving average baseline.", "The reduction of variance induced by the SCT baseline leads to a near-perfect recovery of the good parsing strategy in all five experiments.", "This shows the importance of this baseline for the stability of our approach.", "Sensitivity to hyperparameters.", "Our model is relatively robust to hyperparameters changes when we use the SCT baseline and PPO.", "For example, changing the leaf transformation or dimensionality of the model has a minor impact on performance.", "However, we have observed that the choice of the optimiser has a significant impact.", "For example, the average performance drops to 73.0% if we replace Adadelta by Adam (Kingma and Ba, 2014 ).", "Yet, the maximum value out of 5 runs remains relatively high, 99.0%.", "Untied parameters.", "As opposed to previous work, the parameters of the parser and the composition function are not tied in our model.", "Without this separation between syntax and semantics, it would be impossible to update one module with- out changing the other.", "The gradient direction is then dominated by the low variance signal from the semantic component, making it hard to learn the parser.", "We confirmed experimentally that our model with tied parameters fails to find the correct parser and its accuracy drops to 64.7%.", "Extrapolation and Grammaticality.", "Recursive models have the potential to generalise to any 
sequence length.", "Our model was trained with sequences of length up to 130 tokens.", "We test the ability of the model to generalise to longer sequences by generating additional expressions of lengths 200 to 1000.", "As shown in Fig.1 , our model has a little loss in accuracy as the length increases to ten times the maximum length seen during training.", "On the other hand, we notice that final representations produced by the parser are very similar to each other.", "Indeed, the cosine similarity between these vectors for the test set has a mean value of 0.998 with a standard deviation of 0.002.", "There are two possible explanations for this observation: either our model assigns similar representations to valid expressions, or it produces a trivial uninformative representation regardless of the expression.", "To verify which explanation is correct, we generate ungrammatical expressions by removing either one operation token or one closing bracket symbol for each sequence in the test set.", "As shown in Figure 2 , in contrast to grammatical expressions, ungrammatical ones tend to be very different from each other: \"Happy families are all alike; every unhappy family is unhappy in its own way.\"", "The only exception, marked by a mode near 1, come from ungrammatical expressions that represent incomplete expressions because of missing a closing bracket at the end.", "This kind of sequences were seen by the parser during training and they indeed have to be represented by the same vector.", "These observations show that our model does not produce a trivial representation, but identifies the rules and constraints of the grammar.", "Moreover, vectors for grammatical sequences are so different from vectors for ungrammatical ones that you can tell them apart with 99.99% accuracy by simply measuring their cosine similarity to a randomly chosen grammatical vector from the training set.", "Interestingly, we have not observed a similar signal from the vectors generated by the composition function.", "Even learning a naive classifier between grammatical and ungrammatical expressions on top of these representations achieves an accuracy of only 75%.", "This suggests that most of the syntactic information is captured by the parser, not the composition function.", "Natural Language Inference We next evaluate our model on natural language inference using the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Natural language inference consists in predicting the relationship between two sentences which can be either entailment, contradiction, or neutral.", "The task can be formulated as a three-way classification problem.", "The results are shown in Tables 3 and 4 .", "When training the model on MultiNLI dataset we augment the training data with the SNLI data and use matched versions of the de- velopment and test sets.", "Surprisingly, two out of four models for MultiNLI task collapsed to leftbranching parsing strategies.", "This collapse can be explained by the absence of the entropy regularisation and the small number of PPO updates K = 1, which were determined to be optimal via hyperparameter optimisation.", "As with ListOps, using an Adadelta optimizer significantly improves the training of the model.", "Sentiment Analysis We evaluate our model on a sentiment classification task using the Stanford Sentiment Treebank (SST) of Socher et al.", "(2013) .", "All sentences in SST are represented as binary parse trees, and each subtree of a 
parse tree is annotated with the corresponding sentiment score.", "There are two versions of the dataset, with either binary labels, \"negative\" or \"positive\" (SST-2), or five labels, representing fine-grained sentiments (SST-5).", "As shown in Table 5, our results are in line with previous work, confirming the benefits of using latent syntactic parse trees instead of the predefined syntax.", "We noticed that all models trained on NLI or sentiment analysis tasks have parsing policies with relatively high entropy.", "This indicates that the algorithm does not prefer any specific grammar.", "Indeed, generated trees are very similar to balanced ones.", "This result is in line with Shi et al.", "(2018) where they observe that a binary balanced tree encoder gets the best results on most classification tasks.", "We also compare with state-of-the-art sequence-based models.", "For the most part, these models are pre-trained on larger datasets and fine-tuned on these tasks.", "Nonetheless, they outperform recursive models by a significant margin.", "Performance on these datasets is more impacted by pre-training than by learning the syntax.", "It would be interesting to see if a similar pre-training would also improve the performance of recursive models with latent tree learning.", "Conclusion In this paper, we have introduced a novel model for learning latent tree parsers.", "Our approach relies on a separation between syntax and semantics.", "This allows dedicated optimisation schemes for each module.", "In particular, we found that it is important to have an unbiased estimator of the parser gradients and to allow multiple gradient steps with PPO.", "When tested on a CFG, our learned parser generalises to sequences of any length and distinguishes grammatical from ungrammatical expressions by forming meaningful representations for well-formed expressions.", "For natural language tasks, instead, the model prefers to fall back to trivial strategies, in line with what was previously observed by Shi et al.", "(2018) .", "Additionally, our approach performs competitively on several real natural language tasks.", "In the future, we would like to explore further relaxation-based techniques for learning the parser, such as REBAR (Tucker et al., 2017) or ReLAX (Grathwohl et al., 2017) .", "Finally, we plan to look into applying recursive approaches to language modelling as a pre-training step and measure if it has the same impact on downstream tasks as sequential models." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.3.1", "3", "3.1", "3.2", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "Recursive Neural Networks", "Learning with RvNNs", "Latent tree models", "Gumbel Tree-LSTM", "Our model", "Unbiased gradient estimation", "Synchronizing syntax and semantics learning with PPO", "Related work", "Experiments", "ListOps", "Natural Language Inference", "Sentiment Analysis", "Conclusion" ] }
GEM-SciDuet-train-117#paper-1314#slide-7
Variance reduction
the moving average of recent rewards; the self-critical training (SCT) baseline of Rennie et al. (2017)
the moving average of recent rewards; the self-critical training (SCT) baseline of Rennie et al. (2017)
[]
GEM-SciDuet-train-117#paper-1314#slide-8
1314
Cooperative Learning of Disjoint Syntax and Semantics
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) has recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near perfect accuracy on this task. Our model is composed of two separated modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis. * Work done while the author was an intern at Facebook AI Research.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250 ], "paper_content_text": [ "Introduction Standard linguistic theories propose that natural language is structured as nested constituents organised in the form of a tree (Partee et al., 1990) .", "However, most popular models, such as the Long Sort-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) , process text without imposing a grammatical structure.", "To bridge this gap between theory and practice models that process linguistic expressions in a tree-structured manner have been considered in recent work (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015; Bowman et al., 2016) .", "These tree-based models explicitly require access to the syntactic structure for the text, which is not entirely satisfactory.", "Indeed, parse tree level supervision requires a significant amount of annotations from expert lin-guists.", "These trees have been annotated with different goals in mind than the tasks we are using them for.", "Such discrepancy may result in a deterioration of the performance of models relying on them.", "Recently, several attempts were made to learn these models without explicit supervision for the parser (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "However, Williams et al.", "(2018a) has recently shown that the structures learned by these models cannot be ascribed to discovering meaningful syntactic structure.", "These models even fail to learn the simple context-free grammar of nested mathematical operations (Nangia and Bowman, 2018) .", "In this work, we present an extension of Choi et al.", "(2018) , that successfully learns these simple grammars while preserving competitive performance on several standard linguistic tasks.", "Contrary to previous work, our model makes a clear distinction between the parser and the compositional function.", "These two modules are trained with different algorithms, cooperating to build a semantic representation that optimises the objective function.", "The parser's goal is to generate a tree structure for the sentence.", "The compositional function follows this structure to produce the sentence representation.", "Our model contains a continuous component, the compositional function, and a discrete one, the parser.", "The whole system is trained end-to-end with a mix of reinforcement learning and gradient descent.", "Drozdov and Bowman (2017) has noticed the 
difficulty of mixing these two optimisation schemes without one dominating the other.", "This typically leads to the \"coadaptation problem\" where the parser simply follows the compositional function and fails to produce meaningful syntactic structures.", "In this work, we show that this pitfall can be avoided by synchronising the learning paces of the two optimisation schemes.", "This is achieved by com-bining several recent advances in reinforcement learning.", "First, we use input-dependent control variates to reduce the variance of our gradient estimates (Ross, 1997) .", "Then, we apply multiple gradient steps to the parser's policy while controlling for its learning pace using the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017) .", "The code for our model is publicly available 1 .", "Preliminaries In this section, we present existing works on Recursive Neural Networks and their training in the absence of supervision on the syntactic structures.", "Recursive Neural Networks A Recursive Neural Network (RvNN) has its architecture defined by a directed acyclic graph (DAG) given alongside with an input sequence (Goller and Kuchler, 1996) .", "RvNNs are commonly used in NLP to generate sentence representation that leverages available syntactic information, such as a constituency or a dependency parse trees (Socher et al., 2011) .", "Given an input sequence and its associated DAG, a RvNN processes the sequence by applying a transformation to the representations of the tokens lying on the lowest levels of the DAG.", "This transformation, or compositional function, merges these representations into representations for the nodes on the next level of the DAG.", "This process is repeated recursively along the graph structure until the top-level nodes are reached.", "In this work, we assume that the compositional function is the same for every node in the graph.", "Tree-LSTM.", "We focus on a specific type of RvNNs, the tree-based long short-term memory network (Tree-LSTM) of Tai et al.", "(2015) and Zhu et al.", "(2015) .", "Its compositional function generalizes the LSTM cell of Hochreiter and Schmidhuber (1997) to tree-structured topologies, i.e.,       z i f l f r o       =       tanh σ σ σ σ       R h l h r + b , c p = z i + c l f l + c r f r , h p = tanh(c p ) o, where σ and tanh are the sigmoid and hyperbolic tangent functions.", "Tree-LSTM cell is differentiable with respect to its recursion matrix R, bias b and its input.", "The gradients of a Tree-LSTM can thus be computed with backpropagation through structure (BPTS) (Goller and Kuchler, 1996) .", "Learning with RvNNs A tree-based RvNN is a function f θ parameterized by a d dimensional vector θ that predicts an output y given an input x and a tree t. 
Given a dataset D of N triplets (x, t, y), the parameters of the RvNN are learned with the following minimisation problem: min θ∈R d 1 N (x,t,y)∈D (f θ (x, t), y), (1) where is a logistic regression function.", "These models need an externally provided parsing tree for each input sentence during both training and evaluation.", "Alternatives, such as the shift-reducebased SPINN model of Bowman et al.", "(2016) , learn an internal parser from the given trees.", "While these solutions do not need external trees during evaluation, they still require tree level annotations for training.", "More recent work has focused on learning a latent parser with no direct supervision.", "Latent tree models Latent tree models aim at jointly learning the compositional function f θ and a parser without supervision on the syntactic structures (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "The latent parser is defined as a parametric probability distribution over trees conditioned on the input sequence.", "The parameters of this tree distribution p φ (.|x) are represented by a vector φ.", "Given a dataset D of pairs of input sequences x and outputs y, the parameters θ and φ are jointly learned by minimising the following objective function: min θ,φ L(θ, φ) = 1 N (x,y) (E φ [f θ (x, t)], y), (2) where E φ is the expectation with respect to the p φ (.|x) distribution.", "Directly minimising this objective function is often difficult due to expensive marginalisation of the unobserved trees.", "Hence, when is a convex function (e.g.", "cross entropy of an exponential family) usually an upper bound of Eq.", "(2) can be derived by applying Jensen's inequality: L(θ, φ) = 1 N (x,y) E φ [ (f θ (x, t), y)].", "(3) Learning a distribution over a set of discrete items involves a discrete optimisation scheme.", "For example, the RL-SPINN model of Yogatama et al.", "(2016) uses a mix of gradient descent for θ and REINFORCE for φ (Williams et al., 2018a) .", "Drozdov and Bowman (2017) has recently observed that this optimisation strategy tends to produce poor parsers, e.g., parsers that only generate left-branching trees.", "The effect, called the coadaptation issue, is caused by both bias in the parsing strategy and a difference in convergence paces of continuous and discrete optimisers.", "Typically, the parameters θ are learned more rapidly than φ.", "This limits the exploration of the search space to parsing strategies similar to those found at the beginning of the training.", "Gumbel Tree-LSTM In their Gumbel Tree-LSTM model, Choi et al.", "(2018) propose an alternative parsing strategy to avoid the coadaptation issue.", "Their parser incrementally merges a pair of consecutive constituents until a single one remains.", "This strategy reduces the bias towards certain tree configurations observed with RL-SPINN.", "Each word i of the input sequence is represented by an embedding vector.", "A leaf transformation maps this vector to pair of vectors r 0 i =(h 0 i , c 0 i ).", "We considered three types of leaf transformations: affine transformation, LSTM and bidirectional LSTM.", "The resulting representations form the initial states of the Tree-LSTM.", "In the absence of supervision, the tree is built in a bottomup fashion by recursively merging consecutive constituents (i, i + 1) based on merge-candidate scores.", "On each level k of the bottom-up derivation, the merge-candidate score of the pair (i, i+1) is computed as follow: s k (i) = q, Tree-LSTM(r k i , r k i+1 ) , where q is a trainable query 
vector and r k i is the constituent representation at position i after k mergings.", "We merge a pair (i * , i * + 1) sampled from the Categorical distribution built on the merge-candidate scores.", "The representations of the constituents are then updated as follow: r k+1 i =      r k i , i < i * , Tree-LSTM(r k i , r k i+1 ) i = i * , r k i+1 i > i * .", "This procedure is repeated until one constituent remains.", "Its hidden state is the input sentence representation.", "This procedure is non-differentiable.", "Choi et al.", "(2018) use an approximation based on the Gumbel-Softmax distribution (Maddison et al., 2016; Jang et al., 2016) and the reparametrization trick (Kingma and Welling, 2013) .", "This relaxation makes the problem differentiable at the cost of a bias in the gradient estimates (Jang et al., 2016) .", "This difference between the real objective function and their approximation could explain why their method cannot recover simple context-free grammars (Nangia and Bowman, 2018) .", "We investigate this question by proposing an alternative optimisation scheme that directly aims for the correct objective function.", "Our model We consider the problem defined in Eq.", "(3) to jointly learn a composition function and an internal parser.", "Our model is composed of the parser of Choi et al.", "(2018) and the Tree-LSTM for the composition function.", "As suggested in past work Schulman et al., 2017) , we added an entropy H over the tree distribution to the objective function: min θ, φL (θ, φ) − λ x H(t | x), (4) where λ > 0.", "This regulariser improves exploration by preventing early convergence to a suboptimal deterministic parsing strategy.", "The new objective function is differentiable with respect to θ, but not φ, the parameters of the parser.", "Learning θ follows the same procedure with BPTS as if the tree would be externally given.", "In the rest of this section, we discuss the optimization of the parser and a cooperative training strategy to reduce the coadaptation issue.", "Unbiased gradient estimation We cast the training of the parser as a reinforcement learning problem.", "The parser is an agent whose reward function is the negative of the loss function defined in Eq.", "(3).", "Its action space is the space of binary trees.", "The agent's policy is a probability distribution over binary trees that decomposes as a sequence of K merging actions: p φ (t|x) = K k=0 π φ (a i k |r k ), (5) where r k = (r k 0 , .", ".", ".", ", r k K−k ).", "The loss function is optimised with respect to φ with REIN-FORCE (Williams, 1992) .", "REINFORCE requires a considerable number of random samples to obtain a gradient estimate with a reasonable level of variance.", "This number is positively correlated with the size of the search space, which is exponentially large in the case of binary trees.", "We consider several extensions of REINFORCE to circumvent this problem.", "Variance reduction.", "An alternative solution to increasing the number of samples is the control variates method (Ross, 1997) .", "It takes advantage of random variables with known expected values and positive correlation with the quantity whose expectation is tried to be estimated.", "Given an input-output pair (x, y) and tree t sampled from p φ (t|x) , let's define the random variable G as: G(t) = (f θ (x, t), y) ∂log p φ (t|x) ∂φ .", "(6) According to REINFORCE, calculating the gradient with respect to φ for the pair (x, y) is then equivalent to determining the unknown mean of the random variable G(t) 2 .", 
"Let's assume there is a control variate, i.e., a random variable b(t) that positively correlates with G and has known expected value with respect to p φ (.|x).", "Given N samples of the G(t) and the control variate b(t), the new gradient estimator is: G CV = E p φ (t|x) [b(t)] + 1 N N i=1 (G(t i ) − b(t i )) .", "A popular control variate, or baseline, used in REINFORCE is the moving average of recent rewards multiplied by the score function (Ross, 1997) : b(t) = c∇ φ log p φ (t|x).", "It has a zero mean under the p φ (.|x) distribution and it positively correlates with G(t).", "2 Note that while we are computing the gradients using , we could also directly optimise the parser with respect to downstream accuracy.", "Surrogate loss.", "REINFORCE often is implemented via a surrogate loss defined as follow: E t [r φ (t) (f θ (x, t), y)] , (7) whereÊ t is the empirical average over a finite batch of samples and r φ (t) = p φ (t|x) p φ old (t|x) is the probability ratio with φ old standing for the parameters before the update.", "Input-dependent baseline.", "The moving average baseline cannot detect changes in rewards caused by structural differences in the inputs.", "In our case, a long arithmetic expression is much harder to parse than a short one, systematically leading to their lower rewards.", "This structural differences in the rewards aggravate the credit assignment problem by encouraging REINFORCE to discard actions sampled for longer sequences even though there might be some subsequences of actions that produce correct parsing subtrees.", "A solution is to make the baseline inputdependent.", "In particular, we use the self-critical training (SCT) baseline of Rennie et al.", "(2017) , defined as: b(t, x) = c θ,φ (x)∇ φ log p φ (t | x), where c θ,φ is the reward obtained with the policy used at test time, i.e.,t = arg max p φ (t|x).", "This control variate has a zero mean under the p φ (t|x) distribution and correlates positively with the gradients.", "Computing the arg max of a policy among all possible binary trees has exponential complexity.", "We replace it with a simpler greedy decoding, i.e, a tree t is selected by following a sequence of greedy actionsâ k : a k = arg max π φ (a k |r k ).", "This approximation is very efficient and computing the baseline requires only one additional forward pass.", "Gradient normalization.", "We empirically observe significant fluctuations in the gradient norms.", "This creates instability that can not be reduced by additive terms, such as the inputdependent baselines.", "A solution is to divide the gradients by a coarse approximation of their norm, e.g., a running estimate of the reward standard deviation (Mnih and Gregor, 2014) .", "This trick ensures that the rewards remain approximately in the unit ball, making the learning process less sensitive to steep changes in the loss.", "Synchronizing syntax and semantics learning with PPO The gradients of the loss function from the Eq.", "(4) are calculated using two different schemes, BPST for the composition function parameters θ and RE-INFORCE for the parser parameters φ.", "Then, both are updated with SGD.", "The estimate of the gradient with respect to φ has higher variance compared to the estimate with respect to θ.", "Hence, using the same learning rate schedule does not necessarily correspond to the same real pace of learning.", "It is φ parameters that are harder to optimise, so to improve training stability and convergence it is reasonable to aim for such updates that does not change the 
policy too much or too little.", "A simple yet effective solution is the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017) .", "It considers the next surrogate loss: E t max r φ (t) (f θ (x, t), y) , r c φ (t) (f θ (x, t), y) , Where r c φ (t) = clip (r φ (t), 1 − , 1 + ) and is a real number in (0; 0.5].", "The first argument of the max is the surrogate loss for REINFORCE.", "The clipped ratio in the second argument disincentivises the optimiser from performing updates resulting in large tree probability changes.", "With this, the policy parameters can be optimised with repeated K steps of SGD to ensure a similar \"pace\" of learning between the parser and the compositional function.", "Related work Besides the works mentioned in Sec.", "2 and Sec.", "3, there is a vast literature on learning latent parsers.", "Early connectionist work in inferring context-free grammars proposed stack-augmented models and relied on explicit supervision on the strings that belonged to the target language and those that did not (Giles et al., 1989; Sun, 1990; Mozer and Das, 1992) .", "More recently, new stackaugmented models were shown to learn latent grammars from positive evidence alone (Joulin and Mikolov, 2015) .", "In parallel to these, other statistical approaches were proposed to automatically induce grammars from unparsed text (Sampson, 1986; Magerman and Marcus, 1990; Carroll and Charniak, 1992; Brill, 1993; Klein and Manning, 2002) .", "Our work departs from these approaches in that we aim at learning a latent grammar in the context of performing some given task.", "Socher et al.", "(2011) uses a surrogate autoencoder objective to search for a constituency structure, merging nodes greedily based on the reconstruction loss.", "Maillard et al.", "(2017) defines a relaxation of a CYK-like chart parser that is trained for a particular task.", "A similar idea is introduced in Le and Zuidema (2015) where an automatic parser prunes the chart to reduce the overall complexity of the algorithm.", "Another strategy, similar in nature, has been recently proposed by Corro and Titov (2018) , where Gumbel noise is used with differentiable dynamic programming to generate dependency trees.", "In contrast, Yogatama et al.", "(2016) learns a Shift-Reduce parser using reinforcement learning.", "Maillard and Clark (2018) further proposes a beam search strategy to overcome learning trivial trees.", "On a different vein, Vlad Niculae (2018) proposes a quadratic penalty term over the posterior distribution of nonprojective dependency trees to enforce sparsity of the relaxation.", "Finally, there is a large body of work in Reinforcement Learning that aims at discovering how to combine elementary modules to solve complex tasks (Singh, 1992; Chang et al., 2018; Sahni et al., 2017) .", "Due to the limited space, we will not discuss them in further details.", "Experiments We conducted experiments on three different tasks: evaluating mathematical expressions on the ListOps dataset (Nangia and Bowman, 2018) , sentiment analysis on the SST dataset (Socher et al., 2013) and natural language inference task on the SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Technical details.", "For ListOps, we follow the experimental protocol of Nangia and Bowman (2018) , i.e., a 128 dimensional model and a tenway softmax classifier.", "However, we replace their multi-layer perceptron (MLP) by a linear classifier.", "The validation set is composed of 1k examples randomly selected from the training 
set.", "For SST and NLI, we follow the setup of Choi et al.", "(2018) : we initialise the word vectors with GloVe300D (Pennington et al., 2014) and train an MLP classifier on the sentence representations.", "The hyperparameters are selected on the validation set using 5 random seeds for each configuration.", "Our hyperparameters are the learning rate, weight decay, the regularisation parameter λ, the leaf transformations, variance reduction hyperpa- rameters and the number of updates K in PPO.", "We use an adadelta optimizer (Zeiler, 2012).", "ListOps The ListOps dataset probes the syntax learning ability of latent tree models (Nangia and Bowman, 2018) .", "It is designed to have a single correct parsing strategy that a model must learn in order to succeed.", "It is composed of prefix arithmetic expressions and the goal is to predict the numerical output associated with the evaluation of the expression.", "The sequences are made of integers in [0, 9] Table 2 , the current leading latent tree models are unable to learn the correct parsing strategy on ListOps (Nangia and Bowman, 2018) .", "They even achieve performance worse than purely sequential recurrent networks.", "On the other hand, our model achieves near perfect accuracy on this task, suggesting that our model is able to discover the correct parsing strategy.", "Our model differs in several ways from the Gumbel Tree-LSTM of Choi et al.", "(2018) that could explain this gap in performance.", "In the rest of this section, we perform an ablation study on our model to understand the importance of each of these differences.", "Impact of the baseline and PPO.", "We report the impact of our design choices on the performance in Table 1 .", "Our model without baseline nor PPO is vanilla REINFORCE.", "The baselines only improve performance when PPO is used.", "Furthermore, these ablated models without PPO perform on-par with the RL-SPINN model (see Table 2 ).", "This confirms our expectations for models that fail to synchronise syntax and semantics learning.", "Interestingly, using PPO has a positive impact on both baselines, but accuracy remains low with the moving average baseline.", "The reduction of variance induced by the SCT baseline leads to a near-perfect recovery of the good parsing strategy in all five experiments.", "This shows the importance of this baseline for the stability of our approach.", "Sensitivity to hyperparameters.", "Our model is relatively robust to hyperparameters changes when we use the SCT baseline and PPO.", "For example, changing the leaf transformation or dimensionality of the model has a minor impact on performance.", "However, we have observed that the choice of the optimiser has a significant impact.", "For example, the average performance drops to 73.0% if we replace Adadelta by Adam (Kingma and Ba, 2014 ).", "Yet, the maximum value out of 5 runs remains relatively high, 99.0%.", "Untied parameters.", "As opposed to previous work, the parameters of the parser and the composition function are not tied in our model.", "Without this separation between syntax and semantics, it would be impossible to update one module with- out changing the other.", "The gradient direction is then dominated by the low variance signal from the semantic component, making it hard to learn the parser.", "We confirmed experimentally that our model with tied parameters fails to find the correct parser and its accuracy drops to 64.7%.", "Extrapolation and Grammaticality.", "Recursive models have the potential to generalise to any 
sequence length.", "Our model was trained with sequences of length up to 130 tokens.", "We test the ability of the model to generalise to longer sequences by generating additional expressions of lengths 200 to 1000.", "As shown in Fig.1 , our model has a little loss in accuracy as the length increases to ten times the maximum length seen during training.", "On the other hand, we notice that final representations produced by the parser are very similar to each other.", "Indeed, the cosine similarity between these vectors for the test set has a mean value of 0.998 with a standard deviation of 0.002.", "There are two possible explanations for this observation: either our model assigns similar representations to valid expressions, or it produces a trivial uninformative representation regardless of the expression.", "To verify which explanation is correct, we generate ungrammatical expressions by removing either one operation token or one closing bracket symbol for each sequence in the test set.", "As shown in Figure 2 , in contrast to grammatical expressions, ungrammatical ones tend to be very different from each other: \"Happy families are all alike; every unhappy family is unhappy in its own way.\"", "The only exception, marked by a mode near 1, come from ungrammatical expressions that represent incomplete expressions because of missing a closing bracket at the end.", "This kind of sequences were seen by the parser during training and they indeed have to be represented by the same vector.", "These observations show that our model does not produce a trivial representation, but identifies the rules and constraints of the grammar.", "Moreover, vectors for grammatical sequences are so different from vectors for ungrammatical ones that you can tell them apart with 99.99% accuracy by simply measuring their cosine similarity to a randomly chosen grammatical vector from the training set.", "Interestingly, we have not observed a similar signal from the vectors generated by the composition function.", "Even learning a naive classifier between grammatical and ungrammatical expressions on top of these representations achieves an accuracy of only 75%.", "This suggests that most of the syntactic information is captured by the parser, not the composition function.", "Natural Language Inference We next evaluate our model on natural language inference using the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Natural language inference consists in predicting the relationship between two sentences which can be either entailment, contradiction, or neutral.", "The task can be formulated as a three-way classification problem.", "The results are shown in Tables 3 and 4 .", "When training the model on MultiNLI dataset we augment the training data with the SNLI data and use matched versions of the de- velopment and test sets.", "Surprisingly, two out of four models for MultiNLI task collapsed to leftbranching parsing strategies.", "This collapse can be explained by the absence of the entropy regularisation and the small number of PPO updates K = 1, which were determined to be optimal via hyperparameter optimisation.", "As with ListOps, using an Adadelta optimizer significantly improves the training of the model.", "Sentiment Analysis We evaluate our model on a sentiment classification task using the Stanford Sentiment Treebank (SST) of Socher et al.", "(2013) .", "All sentences in SST are represented as binary parse trees, and each subtree of a 
parse tree is annotated with the corresponding sentiment score.", "There are two versions of the dataset, with either binary labels, \"negative\" or \"positive\" (SST-2), or five labels representing fine-grained sentiments (SST-5).", "As shown in Table 5, our results are in line with previous work, confirming the benefits of using latent syntactic parse trees instead of the predefined syntax.", "We noticed that all models trained on NLI or sentiment analysis tasks have parsing policies with relatively high entropy.", "This indicates that the algorithm does not prefer any specific grammar.", "Indeed, the generated trees are very similar to balanced ones.", "This result is in line with Shi et al. (2018), where they observe that a binary balanced tree encoder gets the best results on most classification tasks.", "We also compare with state-of-the-art sequence-based models.", "For the most part, these models are pre-trained on larger datasets and fine-tuned on these tasks.", "Nonetheless, they outperform recursive models by a significant margin.", "Performance on these datasets is more impacted by pre-training than by learning the syntax.", "It would be interesting to see if a similar pre-training would also improve the performance of recursive models with latent tree learning.", "Conclusion In this paper, we have introduced a novel model for learning latent tree parsers.", "Our approach relies on a separation between syntax and semantics.", "This allows dedicated optimisation schemes for each module.", "In particular, we found that it is important to have an unbiased estimator of the parser gradients and to allow multiple gradient steps with PPO.", "When tested on a CFG, our learned parser generalises to sequences of any length and distinguishes grammatical from ungrammatical expressions by forming meaningful representations for well-formed expressions.", "For natural language tasks, instead, the model prefers to fall back to trivial strategies, in line with what was previously observed by Shi et al. (2018).", "Additionally, our approach performs competitively on several real natural language tasks.", "In the future, we would like to explore further relaxation-based techniques for learning the parser, such as REBAR (Tucker et al., 2017) or ReLAX (Grathwohl et al., 2017).", "Finally, we plan to look into applying recursive approaches to language modelling as a pre-training step and measure if it has the same impact on downstream tasks as sequential models." ] }
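The clipped PPO surrogate described in the record's content above is compact enough to write out. Below is a minimal, illustrative PyTorch sketch, not the authors' released code; the tensor names `tree_logprob`, `tree_logprob_old`, and `loss_per_sample` are assumed placeholders for the parser's log-probabilities and the composition-function loss $\ell(f_\theta(x,t), y)$.

```python
import torch

def ppo_tree_surrogate(tree_logprob, tree_logprob_old, loss_per_sample, eps=0.2):
    """Clipped PPO surrogate for a latent-tree parser (illustrative sketch).

    tree_logprob:     log p_phi(t|x) under the current parser, shape (batch,)
    tree_logprob_old: log p_phi_old(t|x) under the parser that sampled t, shape (batch,)
    loss_per_sample:  l(f_theta(x, t), y) from the composition function, shape (batch,)
    """
    # Probability ratio r_phi(t) = p_phi(t|x) / p_phi_old(t|x), in log space for stability.
    ratio = torch.exp(tree_logprob - tree_logprob_old.detach())
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    loss = loss_per_sample.detach()
    # We minimise a loss (a negative reward), so the pessimistic bound is a max.
    return torch.max(ratio * loss, clipped * loss).mean()
```

Because the loss values are detached, gradients flow only through the current log-probabilities, so the same batch of sampled trees can be reused for the K successive parser updates mentioned in the text.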
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.3.1", "3", "3.1", "3.2", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "Recursive Neural Networks", "Learning with RvNNs", "Latent tree models", "Gumbel Tree-LSTM", "Our model", "Unbiased gradient estimation", "Synchronizing syntax and semantics learning with PPO", "Related work", "Experiments", "ListOps", "Natural Language Inference", "Sentiment Analysis", "Conclusion" ] }
GEM-SciDuet-train-117#paper-1314#slide-8
Synchronizing syntax and semantics learning
Proximal Policy Optimization (PPO) of Schulman et al. (2017)
Proximal Policy Optimization (PPO) of Schulman et al. (2017)
[]
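To make the pacing idea from the slide above concrete, a training step could interleave one composition-function update with K clipped-surrogate parser updates. This is a sketch under stated assumptions: `sample_trees`, `composition_loss`, `parser.log_prob`, and the two optimisers are hypothetical helpers, and `ppo_tree_surrogate` refers to the sketch given earlier.

```python
import torch

def train_step(batch, parser, composer, opt_theta, opt_phi, K=4, eps=0.2):
    # Sample trees from the current ("old") parser policy: t ~ p_phi_old(.|x).
    trees, logprob_old = sample_trees(parser, batch)
    logprob_old = logprob_old.detach()

    # One update of the composition parameters theta (backprop through structure).
    loss = composition_loss(composer, batch, trees)  # per-sample l(f_theta(x,t), y)
    opt_theta.zero_grad()
    loss.mean().backward()
    opt_theta.step()

    # K clipped-surrogate updates of the parser parameters phi on the same trees.
    for _ in range(K):
        logprob = parser.log_prob(trees, batch)
        surrogate = ppo_tree_surrogate(logprob, logprob_old, loss.detach(), eps)
        opt_phi.zero_grad()
        surrogate.backward()
        opt_phi.step()
```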
GEM-SciDuet-train-117#paper-1314#slide-11
1314
Cooperative Learning of Disjoint Syntax and Semantics
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) has recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near perfect accuracy on this task. Our model is composed of two separated modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis. * Work done while the author was an intern at Facebook AI Research.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250 ], "paper_content_text": [ "Introduction Standard linguistic theories propose that natural language is structured as nested constituents organised in the form of a tree (Partee et al., 1990) .", "However, most popular models, such as the Long Sort-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) , process text without imposing a grammatical structure.", "To bridge this gap between theory and practice models that process linguistic expressions in a tree-structured manner have been considered in recent work (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015; Bowman et al., 2016) .", "These tree-based models explicitly require access to the syntactic structure for the text, which is not entirely satisfactory.", "Indeed, parse tree level supervision requires a significant amount of annotations from expert lin-guists.", "These trees have been annotated with different goals in mind than the tasks we are using them for.", "Such discrepancy may result in a deterioration of the performance of models relying on them.", "Recently, several attempts were made to learn these models without explicit supervision for the parser (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "However, Williams et al.", "(2018a) has recently shown that the structures learned by these models cannot be ascribed to discovering meaningful syntactic structure.", "These models even fail to learn the simple context-free grammar of nested mathematical operations (Nangia and Bowman, 2018) .", "In this work, we present an extension of Choi et al.", "(2018) , that successfully learns these simple grammars while preserving competitive performance on several standard linguistic tasks.", "Contrary to previous work, our model makes a clear distinction between the parser and the compositional function.", "These two modules are trained with different algorithms, cooperating to build a semantic representation that optimises the objective function.", "The parser's goal is to generate a tree structure for the sentence.", "The compositional function follows this structure to produce the sentence representation.", "Our model contains a continuous component, the compositional function, and a discrete one, the parser.", "The whole system is trained end-to-end with a mix of reinforcement learning and gradient descent.", "Drozdov and Bowman (2017) has noticed the 
difficulty of mixing these two optimisation schemes without one dominating the other.", "This typically leads to the \"coadaptation problem\", where the parser simply follows the compositional function and fails to produce meaningful syntactic structures.", "In this work, we show that this pitfall can be avoided by synchronising the learning paces of the two optimisation schemes.", "This is achieved by combining several recent advances in reinforcement learning.", "First, we use input-dependent control variates to reduce the variance of our gradient estimates (Ross, 1997).", "Then, we apply multiple gradient steps to the parser's policy while controlling for its learning pace using the Proximal Policy Optimization (PPO) of Schulman et al. (2017).", "The code for our model is publicly available¹.", "Preliminaries In this section, we present existing work on Recursive Neural Networks and their training in the absence of supervision on the syntactic structures.", "Recursive Neural Networks A Recursive Neural Network (RvNN) has its architecture defined by a directed acyclic graph (DAG) given alongside an input sequence (Goller and Kuchler, 1996).", "RvNNs are commonly used in NLP to generate sentence representations that leverage available syntactic information, such as constituency or dependency parse trees (Socher et al., 2011).", "Given an input sequence and its associated DAG, an RvNN processes the sequence by applying a transformation to the representations of the tokens lying on the lowest levels of the DAG.", "This transformation, or compositional function, merges these representations into representations for the nodes on the next level of the DAG.", "This process is repeated recursively along the graph structure until the top-level nodes are reached.", "In this work, we assume that the compositional function is the same for every node in the graph.", "Tree-LSTM.", "We focus on a specific type of RvNN, the tree-based long short-term memory network (Tree-LSTM) of Tai et al. (2015) and Zhu et al. (2015).", "Its compositional function generalizes the LSTM cell of Hochreiter and Schmidhuber (1997) to tree-structured topologies (a sketch in code is given after this record's content), i.e., $\begin{pmatrix} z \\ i \\ f_l \\ f_r \\ o \end{pmatrix} = \begin{pmatrix} \tanh \\ \sigma \\ \sigma \\ \sigma \\ \sigma \end{pmatrix} \left( R \begin{pmatrix} h_l \\ h_r \end{pmatrix} + b \right)$, $c_p = z \odot i + c_l \odot f_l + c_r \odot f_r$, $h_p = \tanh(c_p) \odot o$, where $\sigma$ and $\tanh$ are the sigmoid and hyperbolic tangent functions, applied componentwise to the corresponding gates, and $\odot$ denotes the elementwise product.", "The Tree-LSTM cell is differentiable with respect to its recursion matrix R, its bias b, and its input.", "The gradients of a Tree-LSTM can thus be computed with backpropagation through structure (BPTS) (Goller and Kuchler, 1996).", "Learning with RvNNs A tree-based RvNN is a function f θ parameterized by a d-dimensional vector θ that predicts an output y given an input x and a tree t.
Given a dataset D of N triplets (x, t, y), the parameters of the RvNN are learned with the following minimisation problem: $\min_{\theta \in \mathbb{R}^d} \frac{1}{N} \sum_{(x,t,y) \in D} \ell(f_\theta(x, t), y)$, (1) where $\ell$ is a logistic regression function.", "These models need an externally provided parsing tree for each input sentence during both training and evaluation.", "Alternatives, such as the shift-reduce-based SPINN model of Bowman et al. (2016), learn an internal parser from the given trees.", "While these solutions do not need external trees during evaluation, they still require tree-level annotations for training.", "More recent work has focused on learning a latent parser with no direct supervision.", "Latent tree models Latent tree models aim at jointly learning the compositional function $f_\theta$ and a parser without supervision on the syntactic structures (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018).", "The latent parser is defined as a parametric probability distribution over trees conditioned on the input sequence.", "The parameters of this tree distribution $p_\phi(\cdot|x)$ are represented by a vector φ.", "Given a dataset D of pairs of input sequences x and outputs y, the parameters θ and φ are jointly learned by minimising the following objective function: $\min_{\theta,\phi} L(\theta, \phi) = \frac{1}{N} \sum_{(x,y)} \ell(\mathbb{E}_\phi[f_\theta(x, t)], y)$, (2) where $\mathbb{E}_\phi$ is the expectation with respect to the $p_\phi(\cdot|x)$ distribution.", "Directly minimising this objective function is often difficult due to the expensive marginalisation of the unobserved trees.", "Hence, when $\ell$ is a convex function (e.g., the cross entropy of an exponential family), an upper bound of Eq. (2) can usually be derived by applying Jensen's inequality: $\hat{L}(\theta, \phi) = \frac{1}{N} \sum_{(x,y)} \mathbb{E}_\phi[\ell(f_\theta(x, t), y)]$. (3)", "Learning a distribution over a set of discrete items involves a discrete optimisation scheme.", "For example, the RL-SPINN model of Yogatama et al. (2016) uses a mix of gradient descent for θ and REINFORCE for φ (Williams et al., 2018a).", "Drozdov and Bowman (2017) has recently observed that this optimisation strategy tends to produce poor parsers, e.g., parsers that only generate left-branching trees.", "The effect, called the coadaptation issue, is caused both by bias in the parsing strategy and by a difference in the convergence paces of continuous and discrete optimisers.", "Typically, the parameters θ are learned more rapidly than φ.", "This limits the exploration of the search space to parsing strategies similar to those found at the beginning of training.", "Gumbel Tree-LSTM In their Gumbel Tree-LSTM model, Choi et al. (2018) propose an alternative parsing strategy to avoid the coadaptation issue.", "Their parser incrementally merges a pair of consecutive constituents until a single one remains.", "This strategy reduces the bias towards certain tree configurations observed with RL-SPINN.", "Each word i of the input sequence is represented by an embedding vector.", "A leaf transformation maps this vector to a pair of vectors $r_i^0 = (h_i^0, c_i^0)$.", "We considered three types of leaf transformations: an affine transformation, an LSTM, and a bidirectional LSTM.", "The resulting representations form the initial states of the Tree-LSTM.", "In the absence of supervision, the tree is built in a bottom-up fashion by recursively merging consecutive constituents (i, i + 1) based on merge-candidate scores (a code sketch of one merge step is given below).", "On each level k of the bottom-up derivation, the merge-candidate score of the pair (i, i + 1) is computed as follows: $s_k(i) = \langle q, \text{Tree-LSTM}(r_i^k, r_{i+1}^k) \rangle$, where q is a trainable query
vector and $r_i^k$ is the constituent representation at position i after k mergings.", "We merge a pair $(i^*, i^* + 1)$ sampled from the Categorical distribution built on the merge-candidate scores.", "The representations of the constituents are then updated as follows: $r_i^{k+1} = \begin{cases} r_i^k & i < i^*, \\ \text{Tree-LSTM}(r_i^k, r_{i+1}^k) & i = i^*, \\ r_{i+1}^k & i > i^*. \end{cases}$", "This procedure is repeated until one constituent remains.", "Its hidden state is the input sentence representation.", "This procedure is non-differentiable.", "Choi et al. (2018) use an approximation based on the Gumbel-Softmax distribution (Maddison et al., 2016; Jang et al., 2016) and the reparametrization trick (Kingma and Welling, 2013).", "This relaxation makes the problem differentiable at the cost of a bias in the gradient estimates (Jang et al., 2016).", "This difference between the real objective function and their approximation could explain why their method cannot recover simple context-free grammars (Nangia and Bowman, 2018).", "We investigate this question by proposing an alternative optimisation scheme that directly aims for the correct objective function.", "Our model We consider the problem defined in Eq. (3) to jointly learn a composition function and an internal parser.", "Our model is composed of the parser of Choi et al. (2018) and the Tree-LSTM for the composition function.", "As suggested in past work (Schulman et al., 2017), we added an entropy term H over the tree distribution to the objective function: $\min_{\theta,\phi} \hat{L}(\theta, \phi) - \lambda \sum_x H(t \mid x)$, (4) where $\lambda > 0$.", "This regulariser improves exploration by preventing early convergence to a suboptimal deterministic parsing strategy.", "The new objective function is differentiable with respect to θ, but not φ, the parameters of the parser.", "Learning θ follows the same procedure with BPTS as if the tree were externally given.", "In the rest of this section, we discuss the optimisation of the parser and a cooperative training strategy to reduce the coadaptation issue.", "Unbiased gradient estimation We cast the training of the parser as a reinforcement learning problem.", "The parser is an agent whose reward function is the negative of the loss function defined in Eq. (3).", "Its action space is the space of binary trees.", "The agent's policy is a probability distribution over binary trees that decomposes as a sequence of K merging actions: $p_\phi(t|x) = \prod_{k=0}^{K} \pi_\phi(a_{i_k} \mid r^k)$, (5) where $r^k = (r_0^k, \ldots, r_{K-k}^k)$.", "The loss function is optimised with respect to φ with REINFORCE (Williams, 1992).", "REINFORCE requires a considerable number of random samples to obtain a gradient estimate with a reasonable level of variance.", "This number is positively correlated with the size of the search space, which is exponentially large in the case of binary trees.", "We consider several extensions of REINFORCE to circumvent this problem.", "Variance reduction.", "An alternative solution to increasing the number of samples is the control variates method (Ross, 1997).", "It takes advantage of random variables with known expected values that positively correlate with the quantity whose expectation we are trying to estimate.", "Given an input-output pair (x, y) and a tree t sampled from $p_\phi(t|x)$, let us define the random variable G as: $G(t) = \ell(f_\theta(x, t), y) \frac{\partial \log p_\phi(t|x)}{\partial \phi}$. (6)", "According to REINFORCE, calculating the gradient with respect to φ for the pair (x, y) is then equivalent to determining the unknown mean of the random variable G(t)².
"Let's assume there is a control variate, i.e., a random variable b(t) that positively correlates with G and has known expected value with respect to p φ (.|x).", "Given N samples of the G(t) and the control variate b(t), the new gradient estimator is: G CV = E p φ (t|x) [b(t)] + 1 N N i=1 (G(t i ) − b(t i )) .", "A popular control variate, or baseline, used in REINFORCE is the moving average of recent rewards multiplied by the score function (Ross, 1997) : b(t) = c∇ φ log p φ (t|x).", "It has a zero mean under the p φ (.|x) distribution and it positively correlates with G(t).", "2 Note that while we are computing the gradients using , we could also directly optimise the parser with respect to downstream accuracy.", "Surrogate loss.", "REINFORCE often is implemented via a surrogate loss defined as follow: E t [r φ (t) (f θ (x, t), y)] , (7) whereÊ t is the empirical average over a finite batch of samples and r φ (t) = p φ (t|x) p φ old (t|x) is the probability ratio with φ old standing for the parameters before the update.", "Input-dependent baseline.", "The moving average baseline cannot detect changes in rewards caused by structural differences in the inputs.", "In our case, a long arithmetic expression is much harder to parse than a short one, systematically leading to their lower rewards.", "This structural differences in the rewards aggravate the credit assignment problem by encouraging REINFORCE to discard actions sampled for longer sequences even though there might be some subsequences of actions that produce correct parsing subtrees.", "A solution is to make the baseline inputdependent.", "In particular, we use the self-critical training (SCT) baseline of Rennie et al.", "(2017) , defined as: b(t, x) = c θ,φ (x)∇ φ log p φ (t | x), where c θ,φ is the reward obtained with the policy used at test time, i.e.,t = arg max p φ (t|x).", "This control variate has a zero mean under the p φ (t|x) distribution and correlates positively with the gradients.", "Computing the arg max of a policy among all possible binary trees has exponential complexity.", "We replace it with a simpler greedy decoding, i.e, a tree t is selected by following a sequence of greedy actionsâ k : a k = arg max π φ (a k |r k ).", "This approximation is very efficient and computing the baseline requires only one additional forward pass.", "Gradient normalization.", "We empirically observe significant fluctuations in the gradient norms.", "This creates instability that can not be reduced by additive terms, such as the inputdependent baselines.", "A solution is to divide the gradients by a coarse approximation of their norm, e.g., a running estimate of the reward standard deviation (Mnih and Gregor, 2014) .", "This trick ensures that the rewards remain approximately in the unit ball, making the learning process less sensitive to steep changes in the loss.", "Synchronizing syntax and semantics learning with PPO The gradients of the loss function from the Eq.", "(4) are calculated using two different schemes, BPST for the composition function parameters θ and RE-INFORCE for the parser parameters φ.", "Then, both are updated with SGD.", "The estimate of the gradient with respect to φ has higher variance compared to the estimate with respect to θ.", "Hence, using the same learning rate schedule does not necessarily correspond to the same real pace of learning.", "It is φ parameters that are harder to optimise, so to improve training stability and convergence it is reasonable to aim for such updates that does not change the 
policy too much or too little.", "A simple yet effective solution is the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017) .", "It considers the next surrogate loss: E t max r φ (t) (f θ (x, t), y) , r c φ (t) (f θ (x, t), y) , Where r c φ (t) = clip (r φ (t), 1 − , 1 + ) and is a real number in (0; 0.5].", "The first argument of the max is the surrogate loss for REINFORCE.", "The clipped ratio in the second argument disincentivises the optimiser from performing updates resulting in large tree probability changes.", "With this, the policy parameters can be optimised with repeated K steps of SGD to ensure a similar \"pace\" of learning between the parser and the compositional function.", "Related work Besides the works mentioned in Sec.", "2 and Sec.", "3, there is a vast literature on learning latent parsers.", "Early connectionist work in inferring context-free grammars proposed stack-augmented models and relied on explicit supervision on the strings that belonged to the target language and those that did not (Giles et al., 1989; Sun, 1990; Mozer and Das, 1992) .", "More recently, new stackaugmented models were shown to learn latent grammars from positive evidence alone (Joulin and Mikolov, 2015) .", "In parallel to these, other statistical approaches were proposed to automatically induce grammars from unparsed text (Sampson, 1986; Magerman and Marcus, 1990; Carroll and Charniak, 1992; Brill, 1993; Klein and Manning, 2002) .", "Our work departs from these approaches in that we aim at learning a latent grammar in the context of performing some given task.", "Socher et al.", "(2011) uses a surrogate autoencoder objective to search for a constituency structure, merging nodes greedily based on the reconstruction loss.", "Maillard et al.", "(2017) defines a relaxation of a CYK-like chart parser that is trained for a particular task.", "A similar idea is introduced in Le and Zuidema (2015) where an automatic parser prunes the chart to reduce the overall complexity of the algorithm.", "Another strategy, similar in nature, has been recently proposed by Corro and Titov (2018) , where Gumbel noise is used with differentiable dynamic programming to generate dependency trees.", "In contrast, Yogatama et al.", "(2016) learns a Shift-Reduce parser using reinforcement learning.", "Maillard and Clark (2018) further proposes a beam search strategy to overcome learning trivial trees.", "On a different vein, Vlad Niculae (2018) proposes a quadratic penalty term over the posterior distribution of nonprojective dependency trees to enforce sparsity of the relaxation.", "Finally, there is a large body of work in Reinforcement Learning that aims at discovering how to combine elementary modules to solve complex tasks (Singh, 1992; Chang et al., 2018; Sahni et al., 2017) .", "Due to the limited space, we will not discuss them in further details.", "Experiments We conducted experiments on three different tasks: evaluating mathematical expressions on the ListOps dataset (Nangia and Bowman, 2018) , sentiment analysis on the SST dataset (Socher et al., 2013) and natural language inference task on the SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Technical details.", "For ListOps, we follow the experimental protocol of Nangia and Bowman (2018) , i.e., a 128 dimensional model and a tenway softmax classifier.", "However, we replace their multi-layer perceptron (MLP) by a linear classifier.", "The validation set is composed of 1k examples randomly selected from the training 
set.", "For SST and NLI, we follow the setup of Choi et al.", "(2018) : we initialise the word vectors with GloVe300D (Pennington et al., 2014) and train an MLP classifier on the sentence representations.", "The hyperparameters are selected on the validation set using 5 random seeds for each configuration.", "Our hyperparameters are the learning rate, weight decay, the regularisation parameter λ, the leaf transformations, variance reduction hyperpa- rameters and the number of updates K in PPO.", "We use an adadelta optimizer (Zeiler, 2012).", "ListOps The ListOps dataset probes the syntax learning ability of latent tree models (Nangia and Bowman, 2018) .", "It is designed to have a single correct parsing strategy that a model must learn in order to succeed.", "It is composed of prefix arithmetic expressions and the goal is to predict the numerical output associated with the evaluation of the expression.", "The sequences are made of integers in [0, 9] Table 2 , the current leading latent tree models are unable to learn the correct parsing strategy on ListOps (Nangia and Bowman, 2018) .", "They even achieve performance worse than purely sequential recurrent networks.", "On the other hand, our model achieves near perfect accuracy on this task, suggesting that our model is able to discover the correct parsing strategy.", "Our model differs in several ways from the Gumbel Tree-LSTM of Choi et al.", "(2018) that could explain this gap in performance.", "In the rest of this section, we perform an ablation study on our model to understand the importance of each of these differences.", "Impact of the baseline and PPO.", "We report the impact of our design choices on the performance in Table 1 .", "Our model without baseline nor PPO is vanilla REINFORCE.", "The baselines only improve performance when PPO is used.", "Furthermore, these ablated models without PPO perform on-par with the RL-SPINN model (see Table 2 ).", "This confirms our expectations for models that fail to synchronise syntax and semantics learning.", "Interestingly, using PPO has a positive impact on both baselines, but accuracy remains low with the moving average baseline.", "The reduction of variance induced by the SCT baseline leads to a near-perfect recovery of the good parsing strategy in all five experiments.", "This shows the importance of this baseline for the stability of our approach.", "Sensitivity to hyperparameters.", "Our model is relatively robust to hyperparameters changes when we use the SCT baseline and PPO.", "For example, changing the leaf transformation or dimensionality of the model has a minor impact on performance.", "However, we have observed that the choice of the optimiser has a significant impact.", "For example, the average performance drops to 73.0% if we replace Adadelta by Adam (Kingma and Ba, 2014 ).", "Yet, the maximum value out of 5 runs remains relatively high, 99.0%.", "Untied parameters.", "As opposed to previous work, the parameters of the parser and the composition function are not tied in our model.", "Without this separation between syntax and semantics, it would be impossible to update one module with- out changing the other.", "The gradient direction is then dominated by the low variance signal from the semantic component, making it hard to learn the parser.", "We confirmed experimentally that our model with tied parameters fails to find the correct parser and its accuracy drops to 64.7%.", "Extrapolation and Grammaticality.", "Recursive models have the potential to generalise to any 
sequence length.", "Our model was trained with sequences of length up to 130 tokens.", "We test the ability of the model to generalise to longer sequences by generating additional expressions of lengths 200 to 1000.", "As shown in Fig.1 , our model has a little loss in accuracy as the length increases to ten times the maximum length seen during training.", "On the other hand, we notice that final representations produced by the parser are very similar to each other.", "Indeed, the cosine similarity between these vectors for the test set has a mean value of 0.998 with a standard deviation of 0.002.", "There are two possible explanations for this observation: either our model assigns similar representations to valid expressions, or it produces a trivial uninformative representation regardless of the expression.", "To verify which explanation is correct, we generate ungrammatical expressions by removing either one operation token or one closing bracket symbol for each sequence in the test set.", "As shown in Figure 2 , in contrast to grammatical expressions, ungrammatical ones tend to be very different from each other: \"Happy families are all alike; every unhappy family is unhappy in its own way.\"", "The only exception, marked by a mode near 1, come from ungrammatical expressions that represent incomplete expressions because of missing a closing bracket at the end.", "This kind of sequences were seen by the parser during training and they indeed have to be represented by the same vector.", "These observations show that our model does not produce a trivial representation, but identifies the rules and constraints of the grammar.", "Moreover, vectors for grammatical sequences are so different from vectors for ungrammatical ones that you can tell them apart with 99.99% accuracy by simply measuring their cosine similarity to a randomly chosen grammatical vector from the training set.", "Interestingly, we have not observed a similar signal from the vectors generated by the composition function.", "Even learning a naive classifier between grammatical and ungrammatical expressions on top of these representations achieves an accuracy of only 75%.", "This suggests that most of the syntactic information is captured by the parser, not the composition function.", "Natural Language Inference We next evaluate our model on natural language inference using the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Natural language inference consists in predicting the relationship between two sentences which can be either entailment, contradiction, or neutral.", "The task can be formulated as a three-way classification problem.", "The results are shown in Tables 3 and 4 .", "When training the model on MultiNLI dataset we augment the training data with the SNLI data and use matched versions of the de- velopment and test sets.", "Surprisingly, two out of four models for MultiNLI task collapsed to leftbranching parsing strategies.", "This collapse can be explained by the absence of the entropy regularisation and the small number of PPO updates K = 1, which were determined to be optimal via hyperparameter optimisation.", "As with ListOps, using an Adadelta optimizer significantly improves the training of the model.", "Sentiment Analysis We evaluate our model on a sentiment classification task using the Stanford Sentiment Treebank (SST) of Socher et al.", "(2013) .", "All sentences in SST are represented as binary parse trees, and each subtree of a 
parse tree is annotated with the corresponding sentiment score.", "There are two versions of the dataset, with either binary labels, \"negative\" or \"positive\" (SST-2), or five labels representing fine-grained sentiments (SST-5).", "As shown in Table 5, our results are in line with previous work, confirming the benefits of using latent syntactic parse trees instead of the predefined syntax.", "We noticed that all models trained on NLI or sentiment analysis tasks have parsing policies with relatively high entropy.", "This indicates that the algorithm does not prefer any specific grammar.", "Indeed, the generated trees are very similar to balanced ones.", "This result is in line with Shi et al. (2018), where they observe that a binary balanced tree encoder gets the best results on most classification tasks.", "We also compare with state-of-the-art sequence-based models.", "For the most part, these models are pre-trained on larger datasets and fine-tuned on these tasks.", "Nonetheless, they outperform recursive models by a significant margin.", "Performance on these datasets is more impacted by pre-training than by learning the syntax.", "It would be interesting to see if a similar pre-training would also improve the performance of recursive models with latent tree learning.", "Conclusion In this paper, we have introduced a novel model for learning latent tree parsers.", "Our approach relies on a separation between syntax and semantics.", "This allows dedicated optimisation schemes for each module.", "In particular, we found that it is important to have an unbiased estimator of the parser gradients and to allow multiple gradient steps with PPO.", "When tested on a CFG, our learned parser generalises to sequences of any length and distinguishes grammatical from ungrammatical expressions by forming meaningful representations for well-formed expressions.", "For natural language tasks, instead, the model prefers to fall back to trivial strategies, in line with what was previously observed by Shi et al. (2018).", "Additionally, our approach performs competitively on several real natural language tasks.", "In the future, we would like to explore further relaxation-based techniques for learning the parser, such as REBAR (Tucker et al., 2017) or ReLAX (Grathwohl et al., 2017).", "Finally, we plan to look into applying recursive approaches to language modelling as a pre-training step and measure if it has the same impact on downstream tasks as sequential models." ] }
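The Tree-LSTM composition function described in the record's content above maps two children to a parent through five gates produced by a single affine map. Below is a minimal PyTorch sketch of that cell, assuming unbatched, 1-D hidden and cell states; it is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BinaryTreeLSTMCell(nn.Module):
    """Composition function for two children (l, r) -> parent, as in the text."""
    def __init__(self, dim):
        super().__init__()
        # One affine map R [h_l; h_r] + b producing the five gates z, i, f_l, f_r, o.
        self.proj = nn.Linear(2 * dim, 5 * dim)

    def forward(self, h_l, c_l, h_r, c_r):
        z, i, f_l, f_r, o = self.proj(torch.cat([h_l, h_r], dim=-1)).chunk(5, dim=-1)
        z = torch.tanh(z)
        i, f_l, f_r, o = (torch.sigmoid(g) for g in (i, f_l, f_r, o))
        c_p = z * i + c_l * f_l + c_r * f_r   # elementwise products
        h_p = torch.tanh(c_p) * o
        return h_p, c_p
```

Packing all five gates into one linear layer mirrors the single recursion matrix R in the equation, and keeps the cell differentiable end-to-end for backpropagation through structure.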
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.3.1", "3", "3.1", "3.2", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "Recursive Neural Networks", "Learning with RvNNs", "Latent tree models", "Gumbel Tree-LSTM", "Our model", "Unbiased gradient estimation", "Synchronizing syntax and semantics learning with PPO", "Related work", "Experiments", "ListOps", "Natural Language Inference", "Sentiment Analysis", "Conclusion" ] }
GEM-SciDuet-train-117#paper-1314#slide-11
Sentiment Analysis SST 2
Tree-LSTM RL-SPINN ST-Gumbel Ours 65
Tree-LSTM RL-SPINN ST-Gumbel Ours 65
[]
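The self-critical (SCT) baseline from the preceding record's content pairs each sampled tree with a greedily decoded one and uses the difference of their rewards as the learning signal. A rough sketch follows, with `sample_trees`, `greedy_trees`, and `composition_loss` as assumed helpers (not part of the paper's released code):

```python
import torch

def sct_policy_loss(parser, composer, batch):
    # Stochastic rollout: t ~ p_phi(.|x), with its log-probability for the score function.
    trees, logprob = sample_trees(parser, batch)
    with torch.no_grad():
        greedy = greedy_trees(parser, batch)                    # test-time (argmax) policy
        reward = -composition_loss(composer, batch, trees)      # R(t) = -loss
        baseline = -composition_loss(composer, batch, greedy)   # c(x), one extra forward pass
    # REINFORCE with baseline: minimise -(R - c) * log p_phi(t|x), averaged over the batch.
    return (-(reward - baseline) * logprob).mean()
```

Only the rewards are computed under `no_grad`, so the gradient flows through the log-probability term alone, matching the score-function estimator described in the text.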
GEM-SciDuet-train-117#paper-1314#slide-13
1314
Cooperative Learning of Disjoint Syntax and Semantics
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) has recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near perfect accuracy on this task. Our model is composed of two separated modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis. * Work done while the author was an intern at Facebook AI Research.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250 ], "paper_content_text": [ "Introduction Standard linguistic theories propose that natural language is structured as nested constituents organised in the form of a tree (Partee et al., 1990) .", "However, most popular models, such as the Long Sort-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) , process text without imposing a grammatical structure.", "To bridge this gap between theory and practice models that process linguistic expressions in a tree-structured manner have been considered in recent work (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015; Bowman et al., 2016) .", "These tree-based models explicitly require access to the syntactic structure for the text, which is not entirely satisfactory.", "Indeed, parse tree level supervision requires a significant amount of annotations from expert lin-guists.", "These trees have been annotated with different goals in mind than the tasks we are using them for.", "Such discrepancy may result in a deterioration of the performance of models relying on them.", "Recently, several attempts were made to learn these models without explicit supervision for the parser (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "However, Williams et al.", "(2018a) has recently shown that the structures learned by these models cannot be ascribed to discovering meaningful syntactic structure.", "These models even fail to learn the simple context-free grammar of nested mathematical operations (Nangia and Bowman, 2018) .", "In this work, we present an extension of Choi et al.", "(2018) , that successfully learns these simple grammars while preserving competitive performance on several standard linguistic tasks.", "Contrary to previous work, our model makes a clear distinction between the parser and the compositional function.", "These two modules are trained with different algorithms, cooperating to build a semantic representation that optimises the objective function.", "The parser's goal is to generate a tree structure for the sentence.", "The compositional function follows this structure to produce the sentence representation.", "Our model contains a continuous component, the compositional function, and a discrete one, the parser.", "The whole system is trained end-to-end with a mix of reinforcement learning and gradient descent.", "Drozdov and Bowman (2017) has noticed the 
difficulty of mixing these two optimisation schemes without one dominating the other.", "This typically leads to the \"coadaptation problem\", where the parser simply follows the compositional function and fails to produce meaningful syntactic structures.", "In this work, we show that this pitfall can be avoided by synchronising the learning paces of the two optimisation schemes.", "This is achieved by combining several recent advances in reinforcement learning.", "First, we use input-dependent control variates to reduce the variance of our gradient estimates (Ross, 1997).", "Then, we apply multiple gradient steps to the parser's policy while controlling for its learning pace using the Proximal Policy Optimization (PPO) of Schulman et al. (2017).", "The code for our model is publicly available¹.", "Preliminaries In this section, we present existing work on Recursive Neural Networks and their training in the absence of supervision on the syntactic structures.", "Recursive Neural Networks A Recursive Neural Network (RvNN) has its architecture defined by a directed acyclic graph (DAG) given alongside an input sequence (Goller and Kuchler, 1996).", "RvNNs are commonly used in NLP to generate sentence representations that leverage available syntactic information, such as constituency or dependency parse trees (Socher et al., 2011).", "Given an input sequence and its associated DAG, an RvNN processes the sequence by applying a transformation to the representations of the tokens lying on the lowest levels of the DAG.", "This transformation, or compositional function, merges these representations into representations for the nodes on the next level of the DAG.", "This process is repeated recursively along the graph structure until the top-level nodes are reached.", "In this work, we assume that the compositional function is the same for every node in the graph.", "Tree-LSTM.", "We focus on a specific type of RvNN, the tree-based long short-term memory network (Tree-LSTM) of Tai et al. (2015) and Zhu et al. (2015).", "Its compositional function generalizes the LSTM cell of Hochreiter and Schmidhuber (1997) to tree-structured topologies, i.e., $\begin{pmatrix} z \\ i \\ f_l \\ f_r \\ o \end{pmatrix} = \begin{pmatrix} \tanh \\ \sigma \\ \sigma \\ \sigma \\ \sigma \end{pmatrix} \left( R \begin{pmatrix} h_l \\ h_r \end{pmatrix} + b \right)$, $c_p = z \odot i + c_l \odot f_l + c_r \odot f_r$, $h_p = \tanh(c_p) \odot o$, where $\sigma$ and $\tanh$ are the sigmoid and hyperbolic tangent functions, applied componentwise to the corresponding gates, and $\odot$ denotes the elementwise product.", "The Tree-LSTM cell is differentiable with respect to its recursion matrix R, its bias b, and its input.", "The gradients of a Tree-LSTM can thus be computed with backpropagation through structure (BPTS) (Goller and Kuchler, 1996).", "Learning with RvNNs A tree-based RvNN is a function f θ parameterized by a d-dimensional vector θ that predicts an output y given an input x and a tree t.
Given a dataset D of N triplets (x, t, y), the parameters of the RvNN are learned with the following minimisation problem: $\min_{\theta \in \mathbb{R}^d} \frac{1}{N} \sum_{(x,t,y) \in D} \ell(f_\theta(x, t), y)$, (1) where $\ell$ is a logistic regression function.", "These models need an externally provided parsing tree for each input sentence during both training and evaluation.", "Alternatives, such as the shift-reduce-based SPINN model of Bowman et al. (2016), learn an internal parser from the given trees.", "While these solutions do not need external trees during evaluation, they still require tree-level annotations for training.", "More recent work has focused on learning a latent parser with no direct supervision.", "Latent tree models Latent tree models aim at jointly learning the compositional function $f_\theta$ and a parser without supervision on the syntactic structures (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018).", "The latent parser is defined as a parametric probability distribution over trees conditioned on the input sequence.", "The parameters of this tree distribution $p_\phi(\cdot|x)$ are represented by a vector φ.", "Given a dataset D of pairs of input sequences x and outputs y, the parameters θ and φ are jointly learned by minimising the following objective function: $\min_{\theta,\phi} L(\theta, \phi) = \frac{1}{N} \sum_{(x,y)} \ell(\mathbb{E}_\phi[f_\theta(x, t)], y)$, (2) where $\mathbb{E}_\phi$ is the expectation with respect to the $p_\phi(\cdot|x)$ distribution.", "Directly minimising this objective function is often difficult due to the expensive marginalisation of the unobserved trees.", "Hence, when $\ell$ is a convex function (e.g., the cross entropy of an exponential family), an upper bound of Eq. (2) can usually be derived by applying Jensen's inequality: $\hat{L}(\theta, \phi) = \frac{1}{N} \sum_{(x,y)} \mathbb{E}_\phi[\ell(f_\theta(x, t), y)]$. (3)", "Learning a distribution over a set of discrete items involves a discrete optimisation scheme.", "For example, the RL-SPINN model of Yogatama et al. (2016) uses a mix of gradient descent for θ and REINFORCE for φ (Williams et al., 2018a).", "Drozdov and Bowman (2017) has recently observed that this optimisation strategy tends to produce poor parsers, e.g., parsers that only generate left-branching trees.", "The effect, called the coadaptation issue, is caused both by bias in the parsing strategy and by a difference in the convergence paces of continuous and discrete optimisers.", "Typically, the parameters θ are learned more rapidly than φ.", "This limits the exploration of the search space to parsing strategies similar to those found at the beginning of training.", "Gumbel Tree-LSTM In their Gumbel Tree-LSTM model, Choi et al. (2018) propose an alternative parsing strategy to avoid the coadaptation issue.", "Their parser incrementally merges a pair of consecutive constituents until a single one remains.", "This strategy reduces the bias towards certain tree configurations observed with RL-SPINN.", "Each word i of the input sequence is represented by an embedding vector.", "A leaf transformation maps this vector to a pair of vectors $r_i^0 = (h_i^0, c_i^0)$.", "We considered three types of leaf transformations: an affine transformation, an LSTM, and a bidirectional LSTM.", "The resulting representations form the initial states of the Tree-LSTM.", "In the absence of supervision, the tree is built in a bottom-up fashion by recursively merging consecutive constituents (i, i + 1) based on merge-candidate scores.", "On each level k of the bottom-up derivation, the merge-candidate score of the pair (i, i + 1) is computed as follows: $s_k(i) = \langle q, \text{Tree-LSTM}(r_i^k, r_{i+1}^k) \rangle$, where q is a trainable query
vector and $r_i^k$ is the constituent representation at position i after k mergings.", "We merge a pair $(i^*, i^* + 1)$ sampled from the Categorical distribution built on the merge-candidate scores.", "The representations of the constituents are then updated as follows: $r_i^{k+1} = \begin{cases} r_i^k & i < i^*, \\ \text{Tree-LSTM}(r_i^k, r_{i+1}^k) & i = i^*, \\ r_{i+1}^k & i > i^*. \end{cases}$", "This procedure is repeated until one constituent remains.", "Its hidden state is the input sentence representation.", "This procedure is non-differentiable.", "Choi et al. (2018) use an approximation based on the Gumbel-Softmax distribution (Maddison et al., 2016; Jang et al., 2016) and the reparametrization trick (Kingma and Welling, 2013).", "This relaxation makes the problem differentiable at the cost of a bias in the gradient estimates (Jang et al., 2016).", "This difference between the real objective function and their approximation could explain why their method cannot recover simple context-free grammars (Nangia and Bowman, 2018).", "We investigate this question by proposing an alternative optimisation scheme that directly aims for the correct objective function.", "Our model We consider the problem defined in Eq. (3) to jointly learn a composition function and an internal parser.", "Our model is composed of the parser of Choi et al. (2018) and the Tree-LSTM for the composition function.", "As suggested in past work (Schulman et al., 2017), we added an entropy term H over the tree distribution to the objective function: $\min_{\theta,\phi} \hat{L}(\theta, \phi) - \lambda \sum_x H(t \mid x)$, (4) where $\lambda > 0$.", "This regulariser improves exploration by preventing early convergence to a suboptimal deterministic parsing strategy.", "The new objective function is differentiable with respect to θ, but not φ, the parameters of the parser.", "Learning θ follows the same procedure with BPTS as if the tree were externally given.", "In the rest of this section, we discuss the optimisation of the parser and a cooperative training strategy to reduce the coadaptation issue.", "Unbiased gradient estimation We cast the training of the parser as a reinforcement learning problem.", "The parser is an agent whose reward function is the negative of the loss function defined in Eq. (3).", "Its action space is the space of binary trees.", "The agent's policy is a probability distribution over binary trees that decomposes as a sequence of K merging actions: $p_\phi(t|x) = \prod_{k=0}^{K} \pi_\phi(a_{i_k} \mid r^k)$, (5) where $r^k = (r_0^k, \ldots, r_{K-k}^k)$.", "The loss function is optimised with respect to φ with REINFORCE (Williams, 1992).", "REINFORCE requires a considerable number of random samples to obtain a gradient estimate with a reasonable level of variance.", "This number is positively correlated with the size of the search space, which is exponentially large in the case of binary trees.", "We consider several extensions of REINFORCE to circumvent this problem.", "Variance reduction.", "An alternative solution to increasing the number of samples is the control variates method (Ross, 1997).", "It takes advantage of random variables with known expected values that positively correlate with the quantity whose expectation we are trying to estimate.", "Given an input-output pair (x, y) and a tree t sampled from $p_\phi(t|x)$, let us define the random variable G as: $G(t) = \ell(f_\theta(x, t), y) \frac{\partial \log p_\phi(t|x)}{\partial \phi}$. (6)", "According to REINFORCE, calculating the gradient with respect to φ for the pair (x, y) is then equivalent to determining the unknown mean of the random variable G(t)².
"Let's assume there is a control variate, i.e., a random variable b(t) that positively correlates with G and has known expected value with respect to p φ (.|x).", "Given N samples of the G(t) and the control variate b(t), the new gradient estimator is: G CV = E p φ (t|x) [b(t)] + 1 N N i=1 (G(t i ) − b(t i )) .", "A popular control variate, or baseline, used in REINFORCE is the moving average of recent rewards multiplied by the score function (Ross, 1997) : b(t) = c∇ φ log p φ (t|x).", "It has a zero mean under the p φ (.|x) distribution and it positively correlates with G(t).", "2 Note that while we are computing the gradients using , we could also directly optimise the parser with respect to downstream accuracy.", "Surrogate loss.", "REINFORCE often is implemented via a surrogate loss defined as follow: E t [r φ (t) (f θ (x, t), y)] , (7) whereÊ t is the empirical average over a finite batch of samples and r φ (t) = p φ (t|x) p φ old (t|x) is the probability ratio with φ old standing for the parameters before the update.", "Input-dependent baseline.", "The moving average baseline cannot detect changes in rewards caused by structural differences in the inputs.", "In our case, a long arithmetic expression is much harder to parse than a short one, systematically leading to their lower rewards.", "This structural differences in the rewards aggravate the credit assignment problem by encouraging REINFORCE to discard actions sampled for longer sequences even though there might be some subsequences of actions that produce correct parsing subtrees.", "A solution is to make the baseline inputdependent.", "In particular, we use the self-critical training (SCT) baseline of Rennie et al.", "(2017) , defined as: b(t, x) = c θ,φ (x)∇ φ log p φ (t | x), where c θ,φ is the reward obtained with the policy used at test time, i.e.,t = arg max p φ (t|x).", "This control variate has a zero mean under the p φ (t|x) distribution and correlates positively with the gradients.", "Computing the arg max of a policy among all possible binary trees has exponential complexity.", "We replace it with a simpler greedy decoding, i.e, a tree t is selected by following a sequence of greedy actionsâ k : a k = arg max π φ (a k |r k ).", "This approximation is very efficient and computing the baseline requires only one additional forward pass.", "Gradient normalization.", "We empirically observe significant fluctuations in the gradient norms.", "This creates instability that can not be reduced by additive terms, such as the inputdependent baselines.", "A solution is to divide the gradients by a coarse approximation of their norm, e.g., a running estimate of the reward standard deviation (Mnih and Gregor, 2014) .", "This trick ensures that the rewards remain approximately in the unit ball, making the learning process less sensitive to steep changes in the loss.", "Synchronizing syntax and semantics learning with PPO The gradients of the loss function from the Eq.", "(4) are calculated using two different schemes, BPST for the composition function parameters θ and RE-INFORCE for the parser parameters φ.", "Then, both are updated with SGD.", "The estimate of the gradient with respect to φ has higher variance compared to the estimate with respect to θ.", "Hence, using the same learning rate schedule does not necessarily correspond to the same real pace of learning.", "It is φ parameters that are harder to optimise, so to improve training stability and convergence it is reasonable to aim for such updates that does not change the 
policy too much or too little.", "A simple yet effective solution is the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017) .", "It considers the next surrogate loss: E t max r φ (t) (f θ (x, t), y) , r c φ (t) (f θ (x, t), y) , Where r c φ (t) = clip (r φ (t), 1 − , 1 + ) and is a real number in (0; 0.5].", "The first argument of the max is the surrogate loss for REINFORCE.", "The clipped ratio in the second argument disincentivises the optimiser from performing updates resulting in large tree probability changes.", "With this, the policy parameters can be optimised with repeated K steps of SGD to ensure a similar \"pace\" of learning between the parser and the compositional function.", "Related work Besides the works mentioned in Sec.", "2 and Sec.", "3, there is a vast literature on learning latent parsers.", "Early connectionist work in inferring context-free grammars proposed stack-augmented models and relied on explicit supervision on the strings that belonged to the target language and those that did not (Giles et al., 1989; Sun, 1990; Mozer and Das, 1992) .", "More recently, new stackaugmented models were shown to learn latent grammars from positive evidence alone (Joulin and Mikolov, 2015) .", "In parallel to these, other statistical approaches were proposed to automatically induce grammars from unparsed text (Sampson, 1986; Magerman and Marcus, 1990; Carroll and Charniak, 1992; Brill, 1993; Klein and Manning, 2002) .", "Our work departs from these approaches in that we aim at learning a latent grammar in the context of performing some given task.", "Socher et al.", "(2011) uses a surrogate autoencoder objective to search for a constituency structure, merging nodes greedily based on the reconstruction loss.", "Maillard et al.", "(2017) defines a relaxation of a CYK-like chart parser that is trained for a particular task.", "A similar idea is introduced in Le and Zuidema (2015) where an automatic parser prunes the chart to reduce the overall complexity of the algorithm.", "Another strategy, similar in nature, has been recently proposed by Corro and Titov (2018) , where Gumbel noise is used with differentiable dynamic programming to generate dependency trees.", "In contrast, Yogatama et al.", "(2016) learns a Shift-Reduce parser using reinforcement learning.", "Maillard and Clark (2018) further proposes a beam search strategy to overcome learning trivial trees.", "On a different vein, Vlad Niculae (2018) proposes a quadratic penalty term over the posterior distribution of nonprojective dependency trees to enforce sparsity of the relaxation.", "Finally, there is a large body of work in Reinforcement Learning that aims at discovering how to combine elementary modules to solve complex tasks (Singh, 1992; Chang et al., 2018; Sahni et al., 2017) .", "Due to the limited space, we will not discuss them in further details.", "Experiments We conducted experiments on three different tasks: evaluating mathematical expressions on the ListOps dataset (Nangia and Bowman, 2018) , sentiment analysis on the SST dataset (Socher et al., 2013) and natural language inference task on the SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Technical details.", "For ListOps, we follow the experimental protocol of Nangia and Bowman (2018) , i.e., a 128 dimensional model and a tenway softmax classifier.", "However, we replace their multi-layer perceptron (MLP) by a linear classifier.", "The validation set is composed of 1k examples randomly selected from the training 
set.", "For SST and NLI, we follow the setup of Choi et al.", "(2018) : we initialise the word vectors with GloVe300D (Pennington et al., 2014) and train an MLP classifier on the sentence representations.", "The hyperparameters are selected on the validation set using 5 random seeds for each configuration.", "Our hyperparameters are the learning rate, weight decay, the regularisation parameter λ, the leaf transformations, variance reduction hyperpa- rameters and the number of updates K in PPO.", "We use an adadelta optimizer (Zeiler, 2012).", "ListOps The ListOps dataset probes the syntax learning ability of latent tree models (Nangia and Bowman, 2018) .", "It is designed to have a single correct parsing strategy that a model must learn in order to succeed.", "It is composed of prefix arithmetic expressions and the goal is to predict the numerical output associated with the evaluation of the expression.", "The sequences are made of integers in [0, 9] Table 2 , the current leading latent tree models are unable to learn the correct parsing strategy on ListOps (Nangia and Bowman, 2018) .", "They even achieve performance worse than purely sequential recurrent networks.", "On the other hand, our model achieves near perfect accuracy on this task, suggesting that our model is able to discover the correct parsing strategy.", "Our model differs in several ways from the Gumbel Tree-LSTM of Choi et al.", "(2018) that could explain this gap in performance.", "In the rest of this section, we perform an ablation study on our model to understand the importance of each of these differences.", "Impact of the baseline and PPO.", "We report the impact of our design choices on the performance in Table 1 .", "Our model without baseline nor PPO is vanilla REINFORCE.", "The baselines only improve performance when PPO is used.", "Furthermore, these ablated models without PPO perform on-par with the RL-SPINN model (see Table 2 ).", "This confirms our expectations for models that fail to synchronise syntax and semantics learning.", "Interestingly, using PPO has a positive impact on both baselines, but accuracy remains low with the moving average baseline.", "The reduction of variance induced by the SCT baseline leads to a near-perfect recovery of the good parsing strategy in all five experiments.", "This shows the importance of this baseline for the stability of our approach.", "Sensitivity to hyperparameters.", "Our model is relatively robust to hyperparameters changes when we use the SCT baseline and PPO.", "For example, changing the leaf transformation or dimensionality of the model has a minor impact on performance.", "However, we have observed that the choice of the optimiser has a significant impact.", "For example, the average performance drops to 73.0% if we replace Adadelta by Adam (Kingma and Ba, 2014 ).", "Yet, the maximum value out of 5 runs remains relatively high, 99.0%.", "Untied parameters.", "As opposed to previous work, the parameters of the parser and the composition function are not tied in our model.", "Without this separation between syntax and semantics, it would be impossible to update one module with- out changing the other.", "The gradient direction is then dominated by the low variance signal from the semantic component, making it hard to learn the parser.", "We confirmed experimentally that our model with tied parameters fails to find the correct parser and its accuracy drops to 64.7%.", "Extrapolation and Grammaticality.", "Recursive models have the potential to generalise to any 
sequence length.", "Our model was trained with sequences of length up to 130 tokens.", "We test the ability of the model to generalise to longer sequences by generating additional expressions of lengths 200 to 1000.", "As shown in Fig.1 , our model has a little loss in accuracy as the length increases to ten times the maximum length seen during training.", "On the other hand, we notice that final representations produced by the parser are very similar to each other.", "Indeed, the cosine similarity between these vectors for the test set has a mean value of 0.998 with a standard deviation of 0.002.", "There are two possible explanations for this observation: either our model assigns similar representations to valid expressions, or it produces a trivial uninformative representation regardless of the expression.", "To verify which explanation is correct, we generate ungrammatical expressions by removing either one operation token or one closing bracket symbol for each sequence in the test set.", "As shown in Figure 2 , in contrast to grammatical expressions, ungrammatical ones tend to be very different from each other: \"Happy families are all alike; every unhappy family is unhappy in its own way.\"", "The only exception, marked by a mode near 1, come from ungrammatical expressions that represent incomplete expressions because of missing a closing bracket at the end.", "This kind of sequences were seen by the parser during training and they indeed have to be represented by the same vector.", "These observations show that our model does not produce a trivial representation, but identifies the rules and constraints of the grammar.", "Moreover, vectors for grammatical sequences are so different from vectors for ungrammatical ones that you can tell them apart with 99.99% accuracy by simply measuring their cosine similarity to a randomly chosen grammatical vector from the training set.", "Interestingly, we have not observed a similar signal from the vectors generated by the composition function.", "Even learning a naive classifier between grammatical and ungrammatical expressions on top of these representations achieves an accuracy of only 75%.", "This suggests that most of the syntactic information is captured by the parser, not the composition function.", "Natural Language Inference We next evaluate our model on natural language inference using the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Natural language inference consists in predicting the relationship between two sentences which can be either entailment, contradiction, or neutral.", "The task can be formulated as a three-way classification problem.", "The results are shown in Tables 3 and 4 .", "When training the model on MultiNLI dataset we augment the training data with the SNLI data and use matched versions of the de- velopment and test sets.", "Surprisingly, two out of four models for MultiNLI task collapsed to leftbranching parsing strategies.", "This collapse can be explained by the absence of the entropy regularisation and the small number of PPO updates K = 1, which were determined to be optimal via hyperparameter optimisation.", "As with ListOps, using an Adadelta optimizer significantly improves the training of the model.", "Sentiment Analysis We evaluate our model on a sentiment classification task using the Stanford Sentiment Treebank (SST) of Socher et al.", "(2013) .", "All sentences in SST are represented as binary parse trees, and each subtree of a 
"There are two versions of the dataset, with either binary labels, \"negative\" or \"positive\" (SST-2), or five labels representing fine-grained sentiments (SST-5).", "As shown in Table 5, our results are in line with previous work, confirming the benefits of using latent syntactic parse trees instead of the predefined syntax.", "We noticed that all models trained on NLI or sentiment analysis tasks have parsing policies with relatively high entropy.", "This indicates that the algorithm does not prefer any specific grammar.", "Indeed, the generated trees are very similar to balanced ones.", "This result is in line with Shi et al. (2018), where they observe that a binary balanced tree encoder gets the best results on most classification tasks.", "We also compare with state-of-the-art sequence-based models.", "For the most part, these models are pre-trained on larger datasets and fine-tuned on these tasks.", "Nonetheless, they outperform recursive models by a significant margin.", "Performance on these datasets is more impacted by pre-training than by learning the syntax.", "It would be interesting to see if a similar pre-training would also improve the performance of recursive models with latent tree learning.", "Conclusion In this paper, we have introduced a novel model for learning latent tree parsers.", "Our approach relies on a separation between syntax and semantics.", "This allows dedicated optimisation schemes for each module.", "In particular, we found that it is important to have an unbiased estimator of the parser gradients and to allow multiple gradient steps with PPO.", "When tested on a CFG, our learned parser generalises to sequences of any length and distinguishes grammatical from ungrammatical expressions by forming meaningful representations for well-formed expressions.", "For natural language tasks, instead, the model prefers to fall back to trivial strategies, in line with what was previously observed by Shi et al. (2018).", "Additionally, our approach performs competitively on several real natural language tasks.", "In the future, we would like to explore further relaxation-based techniques for learning the parser, such as REBAR (Tucker et al., 2017) or ReLAX (Grathwohl et al., 2017).", "Finally, we plan to look into applying recursive approaches to language modelling as a pre-training step and measure if it has the same impact on downstream tasks as sequential models." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.3.1", "3", "3.1", "3.2", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "Recursive Neural Networks", "Learning with RvNNs", "Latent tree models", "Gumbel Tree-LSTM", "Our model", "Unbiased gradient estimation", "Synchronizing syntax and semantics learning with PPO", "Related work", "Experiments", "ListOps", "Natural Language Inference", "Sentiment Analysis", "Conclusion" ] }
GEM-SciDuet-train-117#paper-1314#slide-13
Time and Space complexities
n sentence length d tree-LSTM dimensionality K number of updates in PPO
n sentence length d tree-LSTM dimensionality K number of updates in PPO
[]
GEM-SciDuet-train-117#paper-1314#slide-14
1314
Cooperative Learning of Disjoint Syntax and Semantics
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) have recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near perfect accuracy on this task. Our model is composed of two separated modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis. * Work done while the author was an intern at Facebook AI Research.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250 ], "paper_content_text": [ "Introduction Standard linguistic theories propose that natural language is structured as nested constituents organised in the form of a tree (Partee et al., 1990) .", "However, most popular models, such as the Long Sort-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) , process text without imposing a grammatical structure.", "To bridge this gap between theory and practice models that process linguistic expressions in a tree-structured manner have been considered in recent work (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015; Bowman et al., 2016) .", "These tree-based models explicitly require access to the syntactic structure for the text, which is not entirely satisfactory.", "Indeed, parse tree level supervision requires a significant amount of annotations from expert lin-guists.", "These trees have been annotated with different goals in mind than the tasks we are using them for.", "Such discrepancy may result in a deterioration of the performance of models relying on them.", "Recently, several attempts were made to learn these models without explicit supervision for the parser (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "However, Williams et al.", "(2018a) has recently shown that the structures learned by these models cannot be ascribed to discovering meaningful syntactic structure.", "These models even fail to learn the simple context-free grammar of nested mathematical operations (Nangia and Bowman, 2018) .", "In this work, we present an extension of Choi et al.", "(2018) , that successfully learns these simple grammars while preserving competitive performance on several standard linguistic tasks.", "Contrary to previous work, our model makes a clear distinction between the parser and the compositional function.", "These two modules are trained with different algorithms, cooperating to build a semantic representation that optimises the objective function.", "The parser's goal is to generate a tree structure for the sentence.", "The compositional function follows this structure to produce the sentence representation.", "Our model contains a continuous component, the compositional function, and a discrete one, the parser.", "The whole system is trained end-to-end with a mix of reinforcement learning and gradient descent.", "Drozdov and Bowman (2017) has noticed the 
"difficulty of mixing these two optimisation schemes without one dominating the other.", "This typically leads to the \"coadaptation problem\", where the parser simply follows the compositional function and fails to produce meaningful syntactic structures.", "In this work, we show that this pitfall can be avoided by synchronising the learning paces of the two optimisation schemes.", "This is achieved by combining several recent advances in reinforcement learning.", "First, we use input-dependent control variates to reduce the variance of our gradient estimates (Ross, 1997).", "Then, we apply multiple gradient steps to the parser's policy while controlling for its learning pace using the Proximal Policy Optimization (PPO) of Schulman et al. (2017).", "The code for our model is publicly available.¹", "Preliminaries In this section, we present existing works on Recursive Neural Networks and their training in the absence of supervision on the syntactic structures.", "Recursive Neural Networks A Recursive Neural Network (RvNN) has its architecture defined by a directed acyclic graph (DAG) given alongside an input sequence (Goller and Kuchler, 1996).", "RvNNs are commonly used in NLP to generate sentence representations that leverage available syntactic information, such as constituency or dependency parse trees (Socher et al., 2011).", "Given an input sequence and its associated DAG, an RvNN processes the sequence by applying a transformation to the representations of the tokens lying on the lowest levels of the DAG.", "This transformation, or compositional function, merges these representations into representations for the nodes on the next level of the DAG.", "This process is repeated recursively along the graph structure until the top-level nodes are reached.", "In this work, we assume that the compositional function is the same for every node in the graph.", "Tree-LSTM.", "We focus on a specific type of RvNN, the tree-based long short-term memory network (Tree-LSTM) of Tai et al. (2015) and Zhu et al. (2015).", "Its compositional function generalizes the LSTM cell of Hochreiter and Schmidhuber (1997) to tree-structured topologies, i.e., $\begin{pmatrix} z \\ i \\ f_l \\ f_r \\ o \end{pmatrix} = \begin{pmatrix} \tanh \\ \sigma \\ \sigma \\ \sigma \\ \sigma \end{pmatrix}\left(R\begin{pmatrix} h_l \\ h_r \end{pmatrix} + b\right)$, $c_p = z \odot i + c_l \odot f_l + c_r \odot f_r$, $h_p = \tanh(c_p) \odot o$, where σ and tanh are the sigmoid and hyperbolic tangent functions.", "The Tree-LSTM cell is differentiable with respect to its recursion matrix R, its bias b, and its input.", "The gradients of a Tree-LSTM can thus be computed with backpropagation through structure (BPTS) (Goller and Kuchler, 1996).", "Learning with RvNNs A tree-based RvNN is a function f_θ parameterized by a d-dimensional vector θ that predicts an output y given an input x and a tree t.",
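To make the composition step concrete, here is a minimal PyTorch sketch of the binary Tree-LSTM cell defined by the equations above (PyTorch and the class name are assumptions; the paper does not prescribe an implementation):

```python
import torch
import torch.nn as nn

class TreeLSTMCell(nn.Module):
    """Maps two child states (h_l, c_l) and (h_r, c_r) to a parent state (h_p, c_p)."""
    def __init__(self, dim):
        super().__init__()
        # The recursion matrix R and bias b, producing the five gates at once.
        self.R = nn.Linear(2 * dim, 5 * dim)

    def forward(self, h_l, c_l, h_r, c_r):
        z, i, f_l, f_r, o = self.R(torch.cat([h_l, h_r], dim=-1)).chunk(5, dim=-1)
        c_p = torch.tanh(z) * torch.sigmoid(i) \
            + c_l * torch.sigmoid(f_l) + c_r * torch.sigmoid(f_r)
        h_p = torch.tanh(c_p) * torch.sigmoid(o)
        return h_p, c_p
```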
Given a dataset D of N triplets (x, t, y), the parameters of the RvNN are learned with the following minimisation problem: min θ∈R d 1 N (x,t,y)∈D (f θ (x, t), y), (1) where is a logistic regression function.", "These models need an externally provided parsing tree for each input sentence during both training and evaluation.", "Alternatives, such as the shift-reducebased SPINN model of Bowman et al.", "(2016) , learn an internal parser from the given trees.", "While these solutions do not need external trees during evaluation, they still require tree level annotations for training.", "More recent work has focused on learning a latent parser with no direct supervision.", "Latent tree models Latent tree models aim at jointly learning the compositional function f θ and a parser without supervision on the syntactic structures (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018) .", "The latent parser is defined as a parametric probability distribution over trees conditioned on the input sequence.", "The parameters of this tree distribution p φ (.|x) are represented by a vector φ.", "Given a dataset D of pairs of input sequences x and outputs y, the parameters θ and φ are jointly learned by minimising the following objective function: min θ,φ L(θ, φ) = 1 N (x,y) (E φ [f θ (x, t)], y), (2) where E φ is the expectation with respect to the p φ (.|x) distribution.", "Directly minimising this objective function is often difficult due to expensive marginalisation of the unobserved trees.", "Hence, when is a convex function (e.g.", "cross entropy of an exponential family) usually an upper bound of Eq.", "(2) can be derived by applying Jensen's inequality: L(θ, φ) = 1 N (x,y) E φ [ (f θ (x, t), y)].", "(3) Learning a distribution over a set of discrete items involves a discrete optimisation scheme.", "For example, the RL-SPINN model of Yogatama et al.", "(2016) uses a mix of gradient descent for θ and REINFORCE for φ (Williams et al., 2018a) .", "Drozdov and Bowman (2017) has recently observed that this optimisation strategy tends to produce poor parsers, e.g., parsers that only generate left-branching trees.", "The effect, called the coadaptation issue, is caused by both bias in the parsing strategy and a difference in convergence paces of continuous and discrete optimisers.", "Typically, the parameters θ are learned more rapidly than φ.", "This limits the exploration of the search space to parsing strategies similar to those found at the beginning of the training.", "Gumbel Tree-LSTM In their Gumbel Tree-LSTM model, Choi et al.", "(2018) propose an alternative parsing strategy to avoid the coadaptation issue.", "Their parser incrementally merges a pair of consecutive constituents until a single one remains.", "This strategy reduces the bias towards certain tree configurations observed with RL-SPINN.", "Each word i of the input sequence is represented by an embedding vector.", "A leaf transformation maps this vector to pair of vectors r 0 i =(h 0 i , c 0 i ).", "We considered three types of leaf transformations: affine transformation, LSTM and bidirectional LSTM.", "The resulting representations form the initial states of the Tree-LSTM.", "In the absence of supervision, the tree is built in a bottomup fashion by recursively merging consecutive constituents (i, i + 1) based on merge-candidate scores.", "On each level k of the bottom-up derivation, the merge-candidate score of the pair (i, i+1) is computed as follow: s k (i) = q, Tree-LSTM(r k i , r k i+1 ) , where q is a trainable query 
"We merge a pair $(i^*, i^*+1)$ sampled from the categorical distribution built on the merge-candidate scores.", "The representations of the constituents are then updated as follows: $r^{k+1}_i = \begin{cases} r^k_i, & i < i^*, \\ \mathrm{TreeLSTM}(r^k_i, r^k_{i+1}), & i = i^*, \\ r^k_{i+1}, & i > i^*. \end{cases}$", "This procedure is repeated until one constituent remains.", "Its hidden state is the input sentence representation.", "This procedure is non-differentiable.", "Choi et al. (2018) use an approximation based on the Gumbel-Softmax distribution (Maddison et al., 2016; Jang et al., 2016) and the reparametrization trick (Kingma and Welling, 2013).", "This relaxation makes the problem differentiable at the cost of a bias in the gradient estimates (Jang et al., 2016).", "This difference between the real objective function and their approximation could explain why their method cannot recover simple context-free grammars (Nangia and Bowman, 2018).", "We investigate this question by proposing an alternative optimisation scheme that directly aims for the correct objective function.", "Our model We consider the problem defined in Eq. (3) to jointly learn a composition function and an internal parser.", "Our model is composed of the parser of Choi et al. (2018) and the Tree-LSTM for the composition function.", "As suggested in past work (Schulman et al., 2017), we added an entropy H over the tree distribution to the objective function: $\min_{\theta,\phi} L(\theta,\phi) - \lambda\sum_x H(t\,|\,x)$, (4) where λ > 0.", "This regulariser improves exploration by preventing early convergence to a suboptimal deterministic parsing strategy.", "The new objective function is differentiable with respect to θ, but not φ, the parameters of the parser.", "Learning θ follows the same procedure with BPTS as if the tree were externally given.", "In the rest of this section, we discuss the optimization of the parser and a cooperative training strategy to reduce the coadaptation issue.", "Unbiased gradient estimation We cast the training of the parser as a reinforcement learning problem.", "The parser is an agent whose reward function is the negative of the loss function defined in Eq. (3).", "Its action space is the space of binary trees.", "The agent's policy is a probability distribution over binary trees that decomposes as a sequence of K merging actions: $p_\phi(t|x) = \prod_{k=0}^{K} \pi_\phi(a_{i_k}|r^k)$, (5) where $r^k = (r^k_0, \ldots, r^k_{K-k})$.", "The loss function is optimised with respect to φ with REINFORCE (Williams, 1992).", "REINFORCE requires a considerable number of random samples to obtain a gradient estimate with a reasonable level of variance.", "This number is positively correlated with the size of the search space, which is exponentially large in the case of binary trees.", "We consider several extensions of REINFORCE to circumvent this problem.", "Variance reduction.", "An alternative solution to increasing the number of samples is the control variates method (Ross, 1997).", "It takes advantage of random variables with known expected values that positively correlate with the quantity whose expectation we are trying to estimate.", "Given an input-output pair (x, y) and a tree t sampled from p_φ(t|x), let's define the random variable G as: $G(t) = \ell(f_\theta(x,t), y)\,\frac{\partial \log p_\phi(t|x)}{\partial\phi}$. (6)", "According to REINFORCE, calculating the gradient with respect to φ for the pair (x, y) is then equivalent to determining the unknown mean of the random variable G(t)²."
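The following PyTorch-style sketch shows one bottom-up merge step of the parser described above, combining the merge-candidate scoring, the categorical sampling, and the representation update; it reuses the hypothetical TreeLSTMCell from the earlier sketch, and all names are illustrative:

```python
import torch

def merge_step(reps, query, cell):
    # reps: list of constituent states (h, c) at level k; query: the vector q.
    h = torch.stack([r[0] for r in reps])
    c = torch.stack([r[1] for r in reps])
    # Candidate parents for every adjacent pair (i, i+1), computed in one batch.
    h_new, c_new = cell(h[:-1], c[:-1], h[1:], c[1:])
    scores = h_new @ query                      # s_k(i) = <q, TreeLSTM(r_i, r_i+1)>
    dist = torch.distributions.Categorical(logits=scores)
    i = dist.sample()                           # sampled merge position i*
    idx = int(i)
    new_reps = reps[:idx] + [(h_new[idx], c_new[idx])] + reps[idx + 2:]
    return new_reps, dist.log_prob(i)           # log pi_phi(a | r^k) for REINFORCE
```

Summing the returned log-probabilities over the K merge steps gives log p_phi(t|x), the quantity that appears in Eq. (5) and in the estimator G(t).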
"Let's assume there is a control variate, i.e., a random variable b(t) that positively correlates with G and has known expected value with respect to p φ (.|x).", "Given N samples of the G(t) and the control variate b(t), the new gradient estimator is: G CV = E p φ (t|x) [b(t)] + 1 N N i=1 (G(t i ) − b(t i )) .", "A popular control variate, or baseline, used in REINFORCE is the moving average of recent rewards multiplied by the score function (Ross, 1997) : b(t) = c∇ φ log p φ (t|x).", "It has a zero mean under the p φ (.|x) distribution and it positively correlates with G(t).", "2 Note that while we are computing the gradients using , we could also directly optimise the parser with respect to downstream accuracy.", "Surrogate loss.", "REINFORCE often is implemented via a surrogate loss defined as follow: E t [r φ (t) (f θ (x, t), y)] , (7) whereÊ t is the empirical average over a finite batch of samples and r φ (t) = p φ (t|x) p φ old (t|x) is the probability ratio with φ old standing for the parameters before the update.", "Input-dependent baseline.", "The moving average baseline cannot detect changes in rewards caused by structural differences in the inputs.", "In our case, a long arithmetic expression is much harder to parse than a short one, systematically leading to their lower rewards.", "This structural differences in the rewards aggravate the credit assignment problem by encouraging REINFORCE to discard actions sampled for longer sequences even though there might be some subsequences of actions that produce correct parsing subtrees.", "A solution is to make the baseline inputdependent.", "In particular, we use the self-critical training (SCT) baseline of Rennie et al.", "(2017) , defined as: b(t, x) = c θ,φ (x)∇ φ log p φ (t | x), where c θ,φ is the reward obtained with the policy used at test time, i.e.,t = arg max p φ (t|x).", "This control variate has a zero mean under the p φ (t|x) distribution and correlates positively with the gradients.", "Computing the arg max of a policy among all possible binary trees has exponential complexity.", "We replace it with a simpler greedy decoding, i.e, a tree t is selected by following a sequence of greedy actionsâ k : a k = arg max π φ (a k |r k ).", "This approximation is very efficient and computing the baseline requires only one additional forward pass.", "Gradient normalization.", "We empirically observe significant fluctuations in the gradient norms.", "This creates instability that can not be reduced by additive terms, such as the inputdependent baselines.", "A solution is to divide the gradients by a coarse approximation of their norm, e.g., a running estimate of the reward standard deviation (Mnih and Gregor, 2014) .", "This trick ensures that the rewards remain approximately in the unit ball, making the learning process less sensitive to steep changes in the loss.", "Synchronizing syntax and semantics learning with PPO The gradients of the loss function from the Eq.", "(4) are calculated using two different schemes, BPST for the composition function parameters θ and RE-INFORCE for the parser parameters φ.", "Then, both are updated with SGD.", "The estimate of the gradient with respect to φ has higher variance compared to the estimate with respect to θ.", "Hence, using the same learning rate schedule does not necessarily correspond to the same real pace of learning.", "It is φ parameters that are harder to optimise, so to improve training stability and convergence it is reasonable to aim for such updates that does not change the 
policy too much or too little.", "A simple yet effective solution is the Proximal Policy Optimization (PPO) of Schulman et al.", "(2017) .", "It considers the next surrogate loss: E t max r φ (t) (f θ (x, t), y) , r c φ (t) (f θ (x, t), y) , Where r c φ (t) = clip (r φ (t), 1 − , 1 + ) and is a real number in (0; 0.5].", "The first argument of the max is the surrogate loss for REINFORCE.", "The clipped ratio in the second argument disincentivises the optimiser from performing updates resulting in large tree probability changes.", "With this, the policy parameters can be optimised with repeated K steps of SGD to ensure a similar \"pace\" of learning between the parser and the compositional function.", "Related work Besides the works mentioned in Sec.", "2 and Sec.", "3, there is a vast literature on learning latent parsers.", "Early connectionist work in inferring context-free grammars proposed stack-augmented models and relied on explicit supervision on the strings that belonged to the target language and those that did not (Giles et al., 1989; Sun, 1990; Mozer and Das, 1992) .", "More recently, new stackaugmented models were shown to learn latent grammars from positive evidence alone (Joulin and Mikolov, 2015) .", "In parallel to these, other statistical approaches were proposed to automatically induce grammars from unparsed text (Sampson, 1986; Magerman and Marcus, 1990; Carroll and Charniak, 1992; Brill, 1993; Klein and Manning, 2002) .", "Our work departs from these approaches in that we aim at learning a latent grammar in the context of performing some given task.", "Socher et al.", "(2011) uses a surrogate autoencoder objective to search for a constituency structure, merging nodes greedily based on the reconstruction loss.", "Maillard et al.", "(2017) defines a relaxation of a CYK-like chart parser that is trained for a particular task.", "A similar idea is introduced in Le and Zuidema (2015) where an automatic parser prunes the chart to reduce the overall complexity of the algorithm.", "Another strategy, similar in nature, has been recently proposed by Corro and Titov (2018) , where Gumbel noise is used with differentiable dynamic programming to generate dependency trees.", "In contrast, Yogatama et al.", "(2016) learns a Shift-Reduce parser using reinforcement learning.", "Maillard and Clark (2018) further proposes a beam search strategy to overcome learning trivial trees.", "On a different vein, Vlad Niculae (2018) proposes a quadratic penalty term over the posterior distribution of nonprojective dependency trees to enforce sparsity of the relaxation.", "Finally, there is a large body of work in Reinforcement Learning that aims at discovering how to combine elementary modules to solve complex tasks (Singh, 1992; Chang et al., 2018; Sahni et al., 2017) .", "Due to the limited space, we will not discuss them in further details.", "Experiments We conducted experiments on three different tasks: evaluating mathematical expressions on the ListOps dataset (Nangia and Bowman, 2018) , sentiment analysis on the SST dataset (Socher et al., 2013) and natural language inference task on the SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Technical details.", "For ListOps, we follow the experimental protocol of Nangia and Bowman (2018) , i.e., a 128 dimensional model and a tenway softmax classifier.", "However, we replace their multi-layer perceptron (MLP) by a linear classifier.", "The validation set is composed of 1k examples randomly selected from the training 
set.", "For SST and NLI, we follow the setup of Choi et al.", "(2018) : we initialise the word vectors with GloVe300D (Pennington et al., 2014) and train an MLP classifier on the sentence representations.", "The hyperparameters are selected on the validation set using 5 random seeds for each configuration.", "Our hyperparameters are the learning rate, weight decay, the regularisation parameter λ, the leaf transformations, variance reduction hyperpa- rameters and the number of updates K in PPO.", "We use an adadelta optimizer (Zeiler, 2012).", "ListOps The ListOps dataset probes the syntax learning ability of latent tree models (Nangia and Bowman, 2018) .", "It is designed to have a single correct parsing strategy that a model must learn in order to succeed.", "It is composed of prefix arithmetic expressions and the goal is to predict the numerical output associated with the evaluation of the expression.", "The sequences are made of integers in [0, 9] Table 2 , the current leading latent tree models are unable to learn the correct parsing strategy on ListOps (Nangia and Bowman, 2018) .", "They even achieve performance worse than purely sequential recurrent networks.", "On the other hand, our model achieves near perfect accuracy on this task, suggesting that our model is able to discover the correct parsing strategy.", "Our model differs in several ways from the Gumbel Tree-LSTM of Choi et al.", "(2018) that could explain this gap in performance.", "In the rest of this section, we perform an ablation study on our model to understand the importance of each of these differences.", "Impact of the baseline and PPO.", "We report the impact of our design choices on the performance in Table 1 .", "Our model without baseline nor PPO is vanilla REINFORCE.", "The baselines only improve performance when PPO is used.", "Furthermore, these ablated models without PPO perform on-par with the RL-SPINN model (see Table 2 ).", "This confirms our expectations for models that fail to synchronise syntax and semantics learning.", "Interestingly, using PPO has a positive impact on both baselines, but accuracy remains low with the moving average baseline.", "The reduction of variance induced by the SCT baseline leads to a near-perfect recovery of the good parsing strategy in all five experiments.", "This shows the importance of this baseline for the stability of our approach.", "Sensitivity to hyperparameters.", "Our model is relatively robust to hyperparameters changes when we use the SCT baseline and PPO.", "For example, changing the leaf transformation or dimensionality of the model has a minor impact on performance.", "However, we have observed that the choice of the optimiser has a significant impact.", "For example, the average performance drops to 73.0% if we replace Adadelta by Adam (Kingma and Ba, 2014 ).", "Yet, the maximum value out of 5 runs remains relatively high, 99.0%.", "Untied parameters.", "As opposed to previous work, the parameters of the parser and the composition function are not tied in our model.", "Without this separation between syntax and semantics, it would be impossible to update one module with- out changing the other.", "The gradient direction is then dominated by the low variance signal from the semantic component, making it hard to learn the parser.", "We confirmed experimentally that our model with tied parameters fails to find the correct parser and its accuracy drops to 64.7%.", "Extrapolation and Grammaticality.", "Recursive models have the potential to generalise to any 
sequence length.", "Our model was trained with sequences of length up to 130 tokens.", "We test the ability of the model to generalise to longer sequences by generating additional expressions of lengths 200 to 1000.", "As shown in Fig.1 , our model has a little loss in accuracy as the length increases to ten times the maximum length seen during training.", "On the other hand, we notice that final representations produced by the parser are very similar to each other.", "Indeed, the cosine similarity between these vectors for the test set has a mean value of 0.998 with a standard deviation of 0.002.", "There are two possible explanations for this observation: either our model assigns similar representations to valid expressions, or it produces a trivial uninformative representation regardless of the expression.", "To verify which explanation is correct, we generate ungrammatical expressions by removing either one operation token or one closing bracket symbol for each sequence in the test set.", "As shown in Figure 2 , in contrast to grammatical expressions, ungrammatical ones tend to be very different from each other: \"Happy families are all alike; every unhappy family is unhappy in its own way.\"", "The only exception, marked by a mode near 1, come from ungrammatical expressions that represent incomplete expressions because of missing a closing bracket at the end.", "This kind of sequences were seen by the parser during training and they indeed have to be represented by the same vector.", "These observations show that our model does not produce a trivial representation, but identifies the rules and constraints of the grammar.", "Moreover, vectors for grammatical sequences are so different from vectors for ungrammatical ones that you can tell them apart with 99.99% accuracy by simply measuring their cosine similarity to a randomly chosen grammatical vector from the training set.", "Interestingly, we have not observed a similar signal from the vectors generated by the composition function.", "Even learning a naive classifier between grammatical and ungrammatical expressions on top of these representations achieves an accuracy of only 75%.", "This suggests that most of the syntactic information is captured by the parser, not the composition function.", "Natural Language Inference We next evaluate our model on natural language inference using the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.", "Natural language inference consists in predicting the relationship between two sentences which can be either entailment, contradiction, or neutral.", "The task can be formulated as a three-way classification problem.", "The results are shown in Tables 3 and 4 .", "When training the model on MultiNLI dataset we augment the training data with the SNLI data and use matched versions of the de- velopment and test sets.", "Surprisingly, two out of four models for MultiNLI task collapsed to leftbranching parsing strategies.", "This collapse can be explained by the absence of the entropy regularisation and the small number of PPO updates K = 1, which were determined to be optimal via hyperparameter optimisation.", "As with ListOps, using an Adadelta optimizer significantly improves the training of the model.", "Sentiment Analysis We evaluate our model on a sentiment classification task using the Stanford Sentiment Treebank (SST) of Socher et al.", "(2013) .", "All sentences in SST are represented as binary parse trees, and each subtree of a 
parse tree is annotated with the corresponding sentiment score.", "There are two versions of the dataset, with either binary labels, \"negative\" or \"positive\", (SST-2) or five labels, representing fine-grained sentiments (SST-5).", "As shown in Ta- ble 5, our results are in line with previous work, confirming the benefits of using latent syntactic parse trees instead of the predefined syntax.", "We noticed that all models trained on NLI or sentiment analysis tasks have parsing policies with relatively high entropy.", "This indicates that the algorithm does not prefer any specific grammar.", "Indeed, generated trees are very similar to balanced ones.", "This result is in line with Shi et al.", "(2018) where they observe that binary balanced tree encoder gets the best results on most classification tasks.", "We also compare with state-of-the-art sequence-based models.", "For the most part, these models are pre-trained on larger datasets and fine-tuned on these tasks.", "Nonetheless, they outperform recursive models by a significant margin.", "Performance on these datasets is more impacted by pre-training than by learning the syntax.", "It would be interesting to see if a similar pre-training would also improve the performance of recursive models with latent tree learning.", "Conclusion In this paper, we have introduced a novel model for learning latent tree parsers.", "Our approach relies on a separation between syntax and semantics.", "This allows dedicated optimisation schemes for each module.", "In particular, we found that it is important to have an unbiased estimator of the parser gradients and to allow multiple gradient steps with PPO.", "When tested on a CFG, our learned parser generalises to sequences of any length and distinguishes grammatical from ungrammatical expressions by forming meaningful representations for well-formed expressions.", "For natural language tasks, instead, the model prefers to fall back to trivial strategies, in line with what was previously observed by Shi et al.", "(2018) .", "Additionally, our approach performs competitively on several real natural language tasks.", "In the future, we would like to explore further relaxation-based techniques for learning the parser, such as REBAR (Tucker et al., 2017) or ReLAX (Grathwohl et al., 2017) .", "Finally, we plan to look into applying recursive approaches to language modelling as a pre-training step and measure if it has the same impact on downstream tasks as sequential models." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.3.1", "3", "3.1", "3.2", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "Recursive Neural Networks", "Learning with RvNNs", "Latent tree models", "Gumbel Tree-LSTM", "Our model", "Unbiased gradient estimation", "Synchronizing syntax and semantics learning with PPO", "Related work", "Experiments", "ListOps", "Natural Language Inference", "Sentiment Analysis", "Conclusion" ] }
GEM-SciDuet-train-117#paper-1314#slide-14
Conclusions
The separation between syntax and semantics allows coordination between optimisation schemes for each module. Self-critical training mitigates credit assignment problem by distinguishing hard and easy to solve datapoints. The model can recover a simple context-free grammar of mathematical expressions. The model performs competitively on several real natural language tasks.
The separation between syntax and semantics allows coordination between optimisation schemes for each module. Self-critical training mitigates credit assignment problem by distinguishing hard and easy to solve datapoints. The model can recover a simple context-free grammar of mathematical expressions. The model performs competitively on several real natural language tasks.
[]
GEM-SciDuet-train-118#paper-1321#slide-0
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
"state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017)).", "We also introduce two multiview baselines: MLP-TXT+NET, based on the concatenation of text and network, and DCCA, based on Deep Canonical Correlation Analysis (Andrew et al., 2013).", "Multiview Geolocation Let $X \in \mathbb{R}^{|U|\times|V|}$ be the text view, consisting of the bag of words for each user in U using vocabulary V, and $A \in \mathbb{1}^{|U|\times|U|}$ be the network view, encoding user-user interactions.", "We partition $U = U_S \cup U_H$ into a supervised and a heldout (unlabelled) set, $U_S$ and $U_H$, respectively.", "The goal is to infer the location of unlabelled samples $Y_U$, given the location of labelled samples $Y_S$, where each location is encoded as a one-hot classification label, $y_i \in \mathbb{1}^c$, with c being the number of target regions.", "GCN GCN defines a neural network model f(X, A) with each layer computed as: $\hat{A} = \tilde{D}^{-\frac{1}{2}}(A + \lambda I)\tilde{D}^{-\frac{1}{2}}$, $H^{(l+1)} = \sigma\big(\hat{A}H^{(l)}W^{(l)} + b\big)$, (1) where $\tilde{D}$ is the degree matrix of $A + \lambda I$; the hyperparameter λ controls the weight of a node against its neighbourhood, and is set to 1 in the original model (Kipf and Welling, 2017); $H^{(0)} = X$; the $d_{in}\times d_{out}$ matrix $W^{(l)}$ and the $d_{out}\times 1$ matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights in $\hat{A}$, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user $u_i$, the output of layer l is computed by: $h^{l+1}_i = \sigma\big(\sum_{j\in\mathrm{nhood}(i)} \hat{A}_{ij}\, h^l_j W^l + b^l\big)$, (2) where $W^l$ and $b^l$ are learnable layer parameters, and nhood(i) indicates the neighbours of user $u_i$.", "1 Code and data available at https://github.com/afshinrahimi/geographconv", "[Figure 1: The architecture of the GCN geolocation model with layer-wise highway gates $(W^i_h, b^i_h)$. GCN is applied to a BoW model of user content over the @-mention graph to predict user location.]", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to the propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015), the output of a layer is summed with its input with gating weights $T(h^l)$: $h^{l+1} = h^{l+1}\odot T(h^l) + h^l \odot \big(1 - T(h^l)\big)$, with $T(h^l) = \sigma(W_h h^l + b_h)$.", "DCCA Given two views X and $\hat{A}$ (from Equation 1) of the data samples, CCA (Hotelling, 1936), and its deep version (DCCA) (Andrew et al., 2013), learn functions $f_1(X)$ and $f_2(\hat{A})$ such that the correlation between the outputs of the two functions is maximised: $\rho = \mathrm{corr}(f_1(X), f_2(\hat{A}))$. (4)",
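A minimal PyTorch sketch of one highway-gated graph-convolutional layer, following the equations above (PyTorch, the class name, and the exact gate parameterisation are assumptions; A_hat is the precomputed normalised adjacency):

```python
import torch
import torch.nn as nn

class HighwayGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Linear(dim, dim)   # W^l and b^l of the graph convolution
        self.gate = nn.Linear(dim, dim)   # gate weights W_h and b_h

    def forward(self, H, A_hat):
        H_new = torch.tanh(A_hat @ self.conv(H))   # smooth each user over nhood(i)
        T = torch.sigmoid(self.gate(H))            # T(h^l): how much new info passes
        return H_new * T + H * (1.0 - T)           # highway combination
```

The gate requires matching input and output sizes, which is why this sketch keeps the dimensionality fixed within a layer.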
"The resulting representations of $f_1(X)$ and $f_2(\hat{A})$ are the compressed representations of the two views, where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of the data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the $f_1$ and $f_2$ functions of Equation 4), the output of which is used to estimate the CCA cost: maximise $\mathrm{tr}(W_1^T \Sigma_{12} W_2)$ subject to $W_1^T \Sigma_{11} W_1 = W_2^T \Sigma_{22} W_2 = I$, (5) where $\Sigma_{11}$ and $\Sigma_{22}$ are the covariances of the two outputs, and $\Sigma_{12}$ is the cross-covariance.", "The weights $W_1$ and $W_2$ are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data, the outputs of the two networks, are then concatenated to form a multiview sample representation, as shown in Figure 2.", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010), (2) TWITTER-US, and (3) TWITTER-WORLD, each partitioned into training, development and test sets.", "[Figure 2: The DCCA model architecture: first the two text and network views X and $\hat{A}$ are fed into two neural networks (left), which are unsupervisedly trained to maximise the correlation of their outputs; next, the outputs of the networks are concatenated and fed as input to another neural network (right), which is trained supervisedly to predict locations.]", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree, following Roller et al. (2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build the matrix $\hat{A}$ as in Equation 1 using the collapsed @-mention graph between users, where two users are connected ($A_{ij} = 1$) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and $l_2$ normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with sizes 300, 600, and 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD, respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter, which controls the maximum number of users in each cluster, is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with size 500 for the three datasets.",
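The view construction described above is straightforward to reproduce; below is an illustrative sketch using NumPy, SciPy and scikit-learn (these libraries and the edge-list input format are assumptions, not specified by the paper):

```python
import numpy as np
import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer

def build_views(user_texts, mention_edges, n_users, lam=1.0):
    # Text view X: binary term frequency, IDF, l2-normalised rows.
    X = TfidfVectorizer(binary=True, norm="l2").fit_transform(user_texts)
    # Network view: symmetric @-mention adjacency with weighted self-loops.
    rows, cols = zip(*mention_edges)
    A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n_users, n_users))
    A = A.maximum(A.T) + lam * sp.identity(n_users, format="csr")
    # A_hat = D^{-1/2} (A + lam*I) D^{-1/2}, as in Equation 1.
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.asarray(A.sum(axis=1)).ravel()))
    return X, d_inv_sqrt @ A @ d_inv_sqrt
```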
"The loss function is the CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, and 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, the accuracy of predicting a user within 161km (100 miles) of their known location.", "Baselines We also compare DCCA and GCN with two baselines.", "GCN-LP is based on GCN, but for input, instead of text-based features, we use a one-hot encoding of a user's neighbours, which is then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks, which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used as input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single-hidden-layer multilayer perceptron, where the input to the network is the concatenation of the text view X (the user content's bag-of-words) and $\hat{A}$ (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and network-based views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013).", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3.", "As shown, Deep CCA seems to slightly improve the representations over a pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, often large amounts of labelled data are required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning, where unlabelled (e.g. non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data; however, they differ in terms of how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of labelled samples (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development sets of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text and network view, and the development set, remain fixed for all the experiments.",
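For concreteness, here is a sketch of the MLP-TXT+NET baseline in PyTorch (the framework, class name, and constructor signature are assumptions; X and A_hat are the densified views described above):

```python
import torch
import torch.nn as nn

class MlpTxtNet(nn.Module):
    """One-hidden-layer MLP over the concatenated text and network views."""
    def __init__(self, n_vocab, n_users, hidden, n_labels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_vocab + n_users, hidden),  # input: [X ; A_hat] per user
            nn.ReLU(),
            nn.Linear(hidden, n_labels),           # logits over k-d tree regions
        )

    def forward(self, X, A_hat):
        return self.net(torch.cat([X, A_hat], dim=-1))
```

The per-user input dimension grows with |U| (the rows of A_hat), which is why the number of parameters of this baseline scales with the number of samples, as noted in the error analysis below.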
"Note that the text and network views, and the development set, remain fixed for all the experiments.", "As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters and therefore a lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN to MLP-TXT+NET.", "At the 1% setting for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters; the few labelled samples are insufficient to train the parameters of GCN.", "[Figure 3 panels: (a) MLP-TXT+NET; (b) DCCA; (c) GCN, one graph convolution (Â·X); (d) GCN, two graph convolutions (Â·Â·X).]", "Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise and consequently decrease accuracy, as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but adding further layers changes performance very little, as the gates allow the layer inputs to pass through the network largely unchanged.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6.", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017).", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al.", "(2017b), indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017), outperforming it on the larger datasets, and underperforming on GEOTEXT.", "[Table 1: Geolocation results over the three Twitter datasets for the proposed models: the joint text+network models MLP-TXT+NET, DCCA, and GCN, and the network-based GCN-LP. The models are compared with text-only and network-only methods. The performance of the three joint models is also reported for the minimal supervision scenario where only 1% of the total samples are labelled. \"-\" signifies that no results were reported for the given metric or dataset.]", "Note that Do et al.", "(2017) […].", "GCN-LP outperforms Rahimi et al.", "(2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than that of GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of the data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than in GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of the data; however, the distribution of errors in the development set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.",
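The representation comparison in Figure 3 is straightforward to reproduce for any of the models: take the (possibly convolved) features, run t-SNE, and colour points by location label. A hypothetical sketch using scikit-learn and matplotlib (function and variable names are ours):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels, title):
    """2-D t-SNE of sample representations, coloured by location label."""
    coords2d = TSNE(n_components=2).fit_transform(features)
    plt.scatter(coords2d[:, 0], coords2d[:, 1], c=labels, s=8)
    plt.title(title)
    plt.show()

# e.g. compare raw concatenation against one and two graph convolutions:
# plot_tsne(np.hstack([X, A_hat]), y, "concatenated views")
# plot_tsne(A_hat @ X, y, "one graph convolution")
# plot_tsne(A_hat @ A_hat @ X, y, "two graph convolutions")
```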
"Although we evaluate geolocation models with Median, Mean, and Acc@161, this doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example, users in LA are more likely to talk about LA-related issues such as their sports teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sports teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation, fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like the Midwest U.S., and higher performance in densely populated areas such as NYC and LA, as shown in Figure 7.", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US, using GCN with only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surprisingly OH.", "In particular, users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX, where users from neighbouring smaller states are misclassified as being there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. states is provided in the supplementary material.",
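The state-level confusion analysis above can be reproduced in a few lines of scikit-learn once gold and predicted coordinates have been mapped to state codes; that mapping step is omitted here, and the helper names are ours:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def state_confusion(gold_states, pred_states):
    """Row-normalised confusion matrix over state codes such as 'CA' or 'NY'."""
    states = sorted(set(gold_states) | set(pred_states))
    cm = confusion_matrix(gold_states, pred_states, labels=states).astype(float)
    cm /= np.maximum(cm.sum(axis=1, keepdims=True), 1.0)  # normalise by gold counts
    return states, cm

def top_confusions(states, cm, k=5):
    """The k largest off-diagonal entries, e.g. (0.21, 'TX', 'CA')."""
    off = [(cm[i, j], states[i], states[j])
           for i in range(len(states)) for j in range(len(states)) if i != j]
    return sorted(off, reverse=True)[:k]
```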
"Local Terms In Table 2, local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded, to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in the case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "[Table 2 caption: We present the terms that were present only in the unlabelled data; the terms include city names, hashtags, food names and internet abbreviations.]", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models, which map geographical references in text to locations, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010); (2) geographical topic models, which learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013); and (3) supervised models, which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a).", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real-world scenario.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014), supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015), and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014).", "Node embedding methods build heterogeneous graphs between user-user, user-location and location-location, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017).", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocate 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a text-based model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a); (2) ensembling separately trained text- and network-based models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017); and (3) jointly learning geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017), which can capture the complementary information in text and network views, and also model the interactions between the two.",
"None of the previous multiview approaches - with the exception of Li et al.", "(2012a) and Li et al.", "(2012b), which only use toponyms - effectively uses unlabelled data in the text view; they use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of a few recent works (Miura et al., 2017; Do et al., 2017), previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real-world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real-world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all connections to exhibit location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-0
Location Lost in Translation
Social Media Society
Social Media Society
[]
GEM-SciDuet-train-118#paper-1321#slide-1
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, which uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017)).", "We also introduce two multiview baselines: MLP-TXT+NET, based on the concatenation of text and network, and DCCA, based on Deep Canonical Correlation Analysis (Andrew et al., 2013).", "Multiview Geolocation Let $X \in \mathbb{R}^{|U| \times |V|}$ be the text view, consisting of the bag of words for each user in U using vocabulary V, and $A \in \mathbb{1}^{|U| \times |U|}$ be the network view, encoding user-user interactions.", "We partition $U = U_S \cup U_H$ into a supervised and a heldout (unlabelled) set, $U_S$ and $U_H$, respectively.", "The goal is to infer the location of the unlabelled samples $Y_U$, given the location of the labelled samples $Y_S$, where each location is encoded as a one-hot classification label, $y_i \in \mathbb{1}^c$, with c being the number of target regions.", "GCN GCN defines a neural network model $f(X, A)$ with each layer computed as $\hat{A} = \tilde{D}^{-\frac{1}{2}}(A + \lambda I)\tilde{D}^{-\frac{1}{2}}$ and $H^{(l+1)} = \sigma(\hat{A} H^{(l)} W^{(l)} + b)$ (1), where $\tilde{D}$ is the degree matrix of $A + \lambda I$; the hyperparameter $\lambda$ controls the weight of a node against its neighbourhood, and is set to 1 in the original model (Kipf and Welling, 2017); $H^{(0)} = X$; the $d_{in} \times d_{out}$ matrix $W^{(l)}$ and the $d_{out} \times 1$ matrix $b$ are trainable layer parameters; and $\sigma$ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using the weights in $\hat{A}$, and performs a linear transformation using W and b, followed by a nonlinear activation function ($\sigma$).", "In other words, for user $u_i$, the output of layer l is computed by $h_i^{l+1} = \sigma(\sum_{j \in \mathrm{nhood}(i)} \hat{A}_{ij} h_j^l W^l + b^l)$ (2), where $W^l$ and $b^l$ are learnable layer parameters, and nhood(i) indicates the neighbours of user $u_i$.", "[Footnote 1: Code and data available at https://github.com/afshinrahimi/geographconv]", "[Figure 1: The architecture of the GCN geolocation model with layer-wise highway gates ($W_h^i$, $b_h^i$). GCN is applied to a BoW model of user content over the @-mention graph to predict user location.]", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to the propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015), the output of a layer is summed with its input with gating weights $T(h^l)$: $h^{l+1} = h^{l+1} \circ T(h^l) + h^l \circ (1 - T(h^l))$ (3).", "DCCA Given two views X and $\hat{A}$ (from Equation 1) of the data samples, CCA (Hotelling, 1936), and its deep version (DCCA) (Andrew et al., 2013), learn functions $f_1(X)$ and $f_2(\hat{A})$ such that the correlation between the outputs of the two functions is maximised: $\rho = \mathrm{corr}(f_1(X), f_2(\hat{A}))$ (4).",
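Equations (1)-(3) translate almost line-for-line into code. The following is a minimal dense NumPy sketch of one highway-gated graph-convolutional layer; it is an illustration, not the authors' implementation (that is linked in Footnote 1), and it assumes equal input and output dimensions so the gated sum in Equation (3) is well-defined:

```python
import numpy as np

def normalise_adjacency(A, lam=1.0):
    """Equation (1): A_hat = D^{-1/2} (A + lam*I) D^{-1/2}, symmetric normalisation."""
    A_tilde = A + lam * np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))  # diagonal of D^{-1/2}
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_gcn_layer(A_hat, H, W, b, W_h, b_h):
    """One graph convolution (Eq. 1/2) followed by a highway gate (Eq. 3)."""
    H_conv = np.tanh(A_hat @ H @ W + b)   # smooth each node over its neighbourhood
    T = sigmoid(H @ W_h + b_h)            # transform gate T(h^l)
    return H_conv * T + H * (1.0 - T)     # pass layer input through where gate is closed
```

Stacking three such layers plus a graph-convolutional softmax layer corresponds to the 4-hop averaging radius described under Model Selection.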
"The resulting representations of $f_1(X)$ and $f_2(\hat{A})$ are the compressed representations of the two views, where the uncorrelated noise between them is reduced.", "The new representations ideally capture user communities for the network view, and the language model of those communities for the text view; their concatenation is a multiview representation of the data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the $f_1$ and $f_2$ functions of Equation 4), the output of which is used to estimate the CCA cost: maximise $\mathrm{tr}(W_1^T \Sigma_{12} W_2)$ subject to $W_1^T \Sigma_{11} W_1 = W_2^T \Sigma_{22} W_2 = I$ (5), where $\Sigma_{11}$ and $\Sigma_{22}$ are the covariances of the two outputs, and $\Sigma_{12}$ is the cross-covariance.", "The weights $W_1$ and $W_2$ are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data - the outputs of the two networks - are then concatenated to form a multiview sample representation, as shown in Figure 2.", "[Figure 2: The DCCA model architecture. First, the text and network views X and $\hat{A}$ are fed into two neural networks (left), which are trained without supervision to maximise the correlation of their outputs; next, the outputs of the networks are concatenated and fed as input to another neural network (right), which is trained with supervision to predict locations.]", "Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010), (2) TWITTER-US, and (3) TWITTER-WORLD, each partitioned into training, development and test sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build the matrix $\hat{A}$ as in Equation 1 using the collapsed @-mention graph between users, where two users are connected ($A_{ij} = 1$) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with sizes 300, 600, and 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD, respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter, which controls the maximum number of users in each cluster, is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with size 500 for the three datasets.",
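Both views and the label discretisation can be sketched as follows. TfidfVectorizer(binary=True, norm="l2") matches the binary-TF/IDF/l2 recipe for X, while kd_buckets is a simplified median-split stand-in for the k-d tree bucketing of Roller et al. (2012); the helper names are ours, and the (co-)mention pairs are assumed to be precomputed:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def build_text_view(user_docs):
    """Text view X: binary term frequency, IDF weighting, l2-normalised rows."""
    vectoriser = TfidfVectorizer(binary=True, norm="l2")
    return vectoriser.fit_transform(user_docs)

def build_network_view(num_users, mention_pairs):
    """Adjacency A over the collapsed @-mention graph: A_ij = 1 for connected users."""
    A = np.zeros((num_users, num_users))
    for i, j in mention_pairs:  # direct mentions plus co-mention-derived pairs
        A[i, j] = A[j, i] = 1.0
    return A

def kd_buckets(coords, bucket_size):
    """Median-split (lat, lon) points into region labels of at most bucket_size users."""
    def split(idx, depth):
        if len(idx) <= bucket_size:
            return [idx]
        order = idx[np.argsort(coords[idx, depth % 2])]  # alternate lat/lon axes
        mid = len(order) // 2
        return split(order[:mid], depth + 1) + split(order[mid:], depth + 1)
    labels = np.empty(len(coords), dtype=int)
    for label, bucket in enumerate(split(np.arange(len(coords)), 0)):
        labels[bucket] = label
    return labels
```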
"The loss function is the CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with sizes 300, 600, and 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, the accuracy of predicting a user to within 161km (100 miles) of the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features (X), we use a one-hot encoding of a user's neighbours, which is then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks, which have extra layer parameters and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used as input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single-layer multilayer perceptron where the input to the network is the concatenation of the text view X (the user content's bag-of-words) and Â (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the text-based and network-based views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013).", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3.", "As shown, Deep CCA seems to slightly improve the representations over pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, large amounts of labelled data are often required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning, where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data; however, they differ in how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario only a small fraction of the data is geotagged, we conduct an experiment to analyse the effect of the amount of labelled data on the performance of the three geolocation models.", "We provided the three models with different fractions of labelled samples (as a % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development sets of GEOTEXT, TWITTER-US, and TWITTER-WORLD.",
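MLP-TXT+NET itself is only a single-hidden-layer perceptron over the concatenated views; a hypothetical PyTorch rendering (class name ours, dense inputs assumed) makes the architecture concrete:

```python
import torch
import torch.nn as nn

class MLPTxtNet(nn.Module):
    """Single-hidden-layer MLP over the concatenated views [X ; A_hat]."""
    def __init__(self, text_dim, num_users, hidden_dim, num_labels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + num_users, hidden_dim),  # input is [X ; A_hat]
            nn.ReLU(),
            nn.Linear(hidden_dim, num_labels),            # one logit per k-d tree region
        )

    def forward(self, x_text, a_hat_rows):
        return self.net(torch.cat([x_text, a_hat_rows], dim=1))
```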
"Note that the text and network views, and the development set, remain fixed for all the experiments.", "As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters and therefore a lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN to MLP-TXT+NET.", "At the 1% setting for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters; the few labelled samples are insufficient to train the parameters of GCN.", "[Figure 3 panels: (a) MLP-TXT+NET; (b) DCCA; (c) GCN, one graph convolution (Â·X); (d) GCN, two graph convolutions (Â·Â·X).]", "Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise and consequently decrease accuracy, as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but adding further layers changes performance very little, as the gates allow the layer inputs to pass through the network largely unchanged.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6.", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017).", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al.", "(2017b), indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017), outperforming it on the larger datasets, and underperforming on GEOTEXT.", "[Table 1: Geolocation results over the three Twitter datasets for the proposed models: the joint text+network models MLP-TXT+NET, DCCA, and GCN, and the network-based GCN-LP. The models are compared with text-only and network-only methods. The performance of the three joint models is also reported for the minimal supervision scenario where only 1% of the total samples are labelled. \"-\" signifies that no results were reported for the given metric or dataset.]", "Note that Do et al.", "(2017) […].", "GCN-LP outperforms Rahimi et al.", "(2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than that of GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of the data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than in GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of the data; however, the distribution of errors in the development
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, it doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like Midwest U.S., and higher performance in densely populated areas such as NYC and LA as shown in Figure 7 .", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US using GCN using only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surpris-ingly OH.", "Particularly users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. 
states is provided in the supplementary material.", "Local Terms In Table 2 , local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010) ; (2) geographical topic models that learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013) ; and (3) supervised models which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a) .", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014) , supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015) , and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014) .", "Node embedding methods build heterogeneous graphs between user-user, user-location and locationlocation, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017) .", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocation 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a textbased model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a) ; (2) ensembling separately trained text-and networkbased models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017) ; and (3) jointly learning 
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all the connections to hold location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-1
Applications Public Health Monitoring
Allergy Rates (Paul and Dredze, 2011)
Allergy Rates (Paul and Dredze, 2011)
[]
GEM-SciDuet-train-118#paper-1321#slide-2
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, which uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017)).", "We also introduce two multiview baselines: MLP-TXT+NET, based on the concatenation of text and network, and DCCA, based on Deep Canonical Correlation Analysis (Andrew et al., 2013).", "Multiview Geolocation Let $X \in \mathbb{R}^{|U| \times |V|}$ be the text view, consisting of the bag of words for each user in U using vocabulary V, and $A \in \mathbb{1}^{|U| \times |U|}$ be the network view, encoding user-user interactions.", "We partition $U = U_S \cup U_H$ into a supervised and a heldout (unlabelled) set, $U_S$ and $U_H$, respectively.", "The goal is to infer the location of the unlabelled samples $Y_U$, given the location of the labelled samples $Y_S$, where each location is encoded as a one-hot classification label, $y_i \in \mathbb{1}^c$, with c being the number of target regions.", "GCN GCN defines a neural network model $f(X, A)$ with each layer computed as $\hat{A} = \tilde{D}^{-\frac{1}{2}}(A + \lambda I)\tilde{D}^{-\frac{1}{2}}$ and $H^{(l+1)} = \sigma(\hat{A} H^{(l)} W^{(l)} + b)$ (1), where $\tilde{D}$ is the degree matrix of $A + \lambda I$; the hyperparameter $\lambda$ controls the weight of a node against its neighbourhood, and is set to 1 in the original model (Kipf and Welling, 2017); $H^{(0)} = X$; the $d_{in} \times d_{out}$ matrix $W^{(l)}$ and the $d_{out} \times 1$ matrix $b$ are trainable layer parameters; and $\sigma$ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using the weights in $\hat{A}$, and performs a linear transformation using W and b, followed by a nonlinear activation function ($\sigma$).", "In other words, for user $u_i$, the output of layer l is computed by $h_i^{l+1} = \sigma(\sum_{j \in \mathrm{nhood}(i)} \hat{A}_{ij} h_j^l W^l + b^l)$ (2), where $W^l$ and $b^l$ are learnable layer parameters, and nhood(i) indicates the neighbours of user $u_i$.", "[Footnote 1: Code and data available at https://github.com/afshinrahimi/geographconv]", "[Figure 1: The architecture of the GCN geolocation model with layer-wise highway gates ($W_h^i$, $b_h^i$). GCN is applied to a BoW model of user content over the @-mention graph to predict user location.]", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to the propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015), the output of a layer is summed with its input with gating weights $T(h^l)$: $h^{l+1} = h^{l+1} \circ T(h^l) + h^l \circ (1 - T(h^l))$ (3).", "DCCA Given two views X and $\hat{A}$ (from Equation 1) of the data samples, CCA (Hotelling, 1936), and its deep version (DCCA) (Andrew et al., 2013), learn functions $f_1(X)$ and $f_2(\hat{A})$ such that the correlation between the outputs of the two functions is maximised: $\rho = \mathrm{corr}(f_1(X), f_2(\hat{A}))$ (4).",
"The resulting representations of $f_1(X)$ and $f_2(\hat{A})$ are the compressed representations of the two views, where the uncorrelated noise between them is reduced.", "The new representations ideally capture user communities for the network view, and the language model of those communities for the text view; their concatenation is a multiview representation of the data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the $f_1$ and $f_2$ functions of Equation 4), the output of which is used to estimate the CCA cost: maximise $\mathrm{tr}(W_1^T \Sigma_{12} W_2)$ subject to $W_1^T \Sigma_{11} W_1 = W_2^T \Sigma_{22} W_2 = I$ (5), where $\Sigma_{11}$ and $\Sigma_{22}$ are the covariances of the two outputs, and $\Sigma_{12}$ is the cross-covariance.", "The weights $W_1$ and $W_2$ are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data - the outputs of the two networks - are then concatenated to form a multiview sample representation, as shown in Figure 2.", "[Figure 2: The DCCA model architecture. First, the text and network views X and $\hat{A}$ are fed into two neural networks (left), which are trained without supervision to maximise the correlation of their outputs; next, the outputs of the networks are concatenated and fed as input to another neural network (right), which is trained with supervision to predict locations.]", "Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010), (2) TWITTER-US, and (3) TWITTER-WORLD, each partitioned into training, development and test sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build the matrix $\hat{A}$ as in Equation 1 using the collapsed @-mention graph between users, where two users are connected ($A_{ij} = 1$) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with sizes 300, 600, and 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD, respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter, which controls the maximum number of users in each cluster, is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with size 500 for the three datasets.",
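For readers who want to probe the DCCA pipeline without implementing the CCA loss and its SVD-based gradient, scikit-learn's linear CCA can act as a rough stand-in for the two projection networks. This is a deliberate simplification of the deep model (it assumes dense, modest-sized views; the full 500-component setting would be expensive, and the function name is ours):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_multiview(X_text, A_hat, n_components=50):
    """Project both views with linear CCA and concatenate the projections."""
    cca = CCA(n_components=n_components)
    X_c, A_c = cca.fit_transform(X_text, A_hat)
    return np.hstack([X_c, A_c])  # multiview features for a downstream classifier
```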
"The loss function is the CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with sizes 300, 600, and 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, the accuracy of predicting a user to within 161km (100 miles) of the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features (X), we use a one-hot encoding of a user's neighbours, which is then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks, which have extra layer parameters and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used as input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single-layer multilayer perceptron where the input to the network is the concatenation of the text view X (the user content's bag-of-words) and Â (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the text-based and network-based views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013).", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3.", "As shown, Deep CCA seems to slightly improve the representations over pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, large amounts of labelled data are often required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning, where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data; however, they differ in how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario only a small fraction of the data is geotagged, we conduct an experiment to analyse the effect of the amount of labelled data on the performance of the three geolocation models.", "We provided the three models with different fractions of labelled samples (as a % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development sets of GEOTEXT, TWITTER-US, and TWITTER-WORLD.",
"Note that the text and network views, and the development set, remain fixed for all the experiments.", "As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters and therefore a lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN to MLP-TXT+NET.", "At the 1% setting for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters; the few labelled samples are insufficient to train the parameters of GCN.", "[Figure 3 panels: (a) MLP-TXT+NET; (b) DCCA; (c) GCN, one graph convolution (Â·X); (d) GCN, two graph convolutions (Â·Â·X).]", "Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise and consequently decrease accuracy, as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but adding further layers changes performance very little, as the gates allow the layer inputs to pass through the network largely unchanged.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6.", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017).", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al.", "(2017b), indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017), outperforming it on the larger datasets, and underperforming on GEOTEXT.", "[Table 1: Geolocation results over the three Twitter datasets for the proposed models: the joint text+network models MLP-TXT+NET, DCCA, and GCN, and the network-based GCN-LP. The models are compared with text-only and network-only methods. The performance of the three joint models is also reported for the minimal supervision scenario where only 1% of the total samples are labelled. \"-\" signifies that no results were reported for the given metric or dataset.]", "Note that Do et al.", "(2017) […].", "GCN-LP outperforms Rahimi et al.", "(2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than that of GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of the data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than in GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of the data; however, the distribution of errors in the development
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, this does not mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example, users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation, fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like the Midwest U.S., and higher performance in densely populated areas such as NYC and LA, as shown in Figure 7.", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US, using GCN with only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surprisingly OH.", "In particular, users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX, where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S.
states is provided in the supplementary material.", "Local Terms In Table 2, local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded, to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in the case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models, which map geographical references in text to locations, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010); (2) geographical topic models, which learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013); and (3) supervised models, which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a).", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real-world scenario.", "(Table 2 caption:) We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014), supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015), and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014).", "Node embedding methods build heterogeneous graphs between user-user, user-location and location-location nodes, and learn an embedding space that minimises the distance of connected nodes and maximises the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017).", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocate 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information, such as toponyms or locations predicted from a text-based model, as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a); (2) ensembling separately trained text- and network-based models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017); and (3) jointly learning
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017), which can capture the complementary information in the text and network views, and also model the interactions between the two.", "None of the previous multiview approaches - with the exception of Li et al.", "(2012a) and Li et al.", "(2012b), which only use toponyms - effectively uses unlabelled data in the text view; they use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of a few recent works (Miura et al., 2017; Do et al., 2017), previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both the text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real-world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models, as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real-world applications, by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed that all connections exhibit location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
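The record above evaluates geolocation with Median error, Mean error, and Acc@161 (the fraction of users placed within 161 km of their known location). As a minimal, illustrative sketch of those metrics computed from great-circle distances - function and variable names are mine, not taken from the paper's released code:

import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between (lat, lon) pairs given in degrees.
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def geolocation_metrics(true_coords, pred_coords):
    # Both arguments: arrays of shape (n, 2) holding (lat, lon) per user.
    d = haversine_km(true_coords[:, 0], true_coords[:, 1],
                     pred_coords[:, 0], pred_coords[:, 1])
    return {"Mean": float(d.mean()),
            "Median": float(np.median(d)),
            "Acc@161": float((d <= 161.0).mean())}

For bucketed labels (see the k-d tree discretisation described in this record), a predicted region is first mapped to a representative point, e.g. the median coordinates of its training users, before distances are computed.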
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-2
Applications: Emergency Situation Awareness - Bushfires, Floods and Earthquakes
Fight bushfire with #fire: Alert hospital before anybody calls
Fight bushfire with #fire: Alert hospital before anybody calls
[]
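The Highway GCN of this paper mixes each layer's convolutional output with its unchanged input through a learned gate T(h^l), which the Highway Gates analysis above credits with preventing over-smoothing beyond about 4 layers. A sketch under the assumption that T is a sigmoid function of the layer input; the excerpt does not fully pin down the gate's parameterisation, so W_h, b_h and this exact form are my reading of it:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_gcn_layer(A_hat, H, W, b, W_h, b_h):
    # Candidate output of the graph convolution (input and output
    # dimensionalities must match for the gated mix below).
    H_conv = np.tanh(A_hat @ H @ W + b)
    # Transform gate: how much neighbourhood information to let in.
    T = sigmoid(H @ W_h + b_h)
    # Carry (1 - T) of the layer input through untouched.
    return T * H_conv + (1.0 - T) * H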
GEM-SciDuet-train-118#paper-1321#slide-3
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
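This record's DCCA baseline trains two view networks f1(X) and f2(Â) to maximise the canonical correlation ρ = corr(f1(X), f2(Â)), i.e. tr(W1ᵀ Σ12 W2) under the whitening constraints W1ᵀ Σ11 W1 = W2ᵀ Σ22 W2 = I, solved via SVD. A NumPy sketch of the quantity being maximised, computed from the two networks' outputs; the ridge term r is my addition for numerical stability and is not mentioned in the excerpt:

import numpy as np

def cca_correlation(F1, F2, r=1e-4):
    # F1, F2: (n, d) outputs of the two view networks f1(X) and f2(A-hat).
    F1 = F1 - F1.mean(axis=0)
    F2 = F2 - F2.mean(axis=0)
    n = F1.shape[0]
    S11 = F1.T @ F1 / (n - 1) + r * np.eye(F1.shape[1])
    S22 = F2.T @ F2 / (n - 1) + r * np.eye(F2.shape[1])
    S12 = F1.T @ F2 / (n - 1)

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    # Sum of singular values of T = total canonical correlation;
    # DCCA backpropagates the negative of this through both networks.
    return np.linalg.svd(T, compute_uv=False).sum()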
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017) ).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013) .", "Multivew Geolocation Let X ∈ R |U |×|V | be the text view, consisting of the bag of words for each user in U using vocabulary V , and A ∈ 1 |U |×|U | be the network view, encoding user-user interactions.", "We partition U = U S ∪ U H into a supervised and heldout (unlabelled) set, U S and U H , respectively.", "The goal is to infer the location of unlabelled samples Y U , given the location of labelled samples Y S , where each location is encoded as a one-hot classification label, y i ∈ 1 c with c being the number of target regions.", "GCN GCN defines a neural network model f (X, A) with each layer: =D − 1 2 (A + λI)D − 1 2 H (l+1) = σ  H (l) W (l) + b , (1) whereD is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017) ; H 0 = X and the d in × d out matrix W (l) and d out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights inÂ, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u i , the output of layer l is computed by: h l+1 i = σ j∈nhood(i) ij h l j W l + b l , (2) 1 Code and data available at https://github.com/ afshinrahimi/geographconv Highway GCN: Highway GCN: , Output GCN: Figure 1 : The architecture of GCN geolocation model with layer-wise highway gates (W i h , b i h ).", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "X = BoWtext   A tanh tanh softmax H 0 H 1 H l−1 H l predict location:ŷ W l−1 , b l−1 , W l−1 h , b l−1 h W 1 , b 1 , W 1 h , b 1 h W l , b l where W l and b l are learnable layer parameters, and nhood(i) indicates the neighbours of user u i .", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015) , the output of a layer is summed with its input with gating weights T ( h l ): DCCA Given two views X and (from Equation 1) of data samples, CCA (Hotelling, 1936) , and its deep version (DCCA) (Andrew et al., 2013) learn functions f 1 (X) and f 2 (Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f 1 (X), f 2 (Â)) .", "(4) The resulting 
representations of f 1 (X) and f 2 (Â) are the compressed representations of the two views where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f 1 and f 2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise: tr(W T 1 Σ 12 W 2 ) subject to: W T 1 Σ 11 W 1 = W T 2 Σ 22 W 2 = I (5) where Σ 11 and Σ 22 are the covariances of the two outputs, and Σ 12 is the cross-covariance.", "The weights W 1 and W 2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data -the outputs of the two networks -are then concatenated to form a multiview sample representation, as shown in Figure 2 .", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010) , Figure 2 : The DCCA model architecture: First the two text and network views X and are fed into two neural networks (left), which are unsupervisedly trained to maximise the correlation of their outputs; next the outputs of the networks are concatenated, and fed as input to another neural network (right), which is trained supervisedly to predict locations.", "sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012) , with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l 2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with size 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter which controls the maximum number of users in each cluster is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with 
size 500 for the three datasets.", "The loss function is CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, accuracy of predicting a user within 161km or 100 miles from the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features , we use one-hot encoding of a user's neighbours, which are then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single layer multilayer perceptron where the input to the network is the concatenation of the text view X, the user content's bag-of-words and (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and networkbased views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013) .", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3 .", "As shown, Deep CCA seems to slightly improve the representations from pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, often large amounts of labelled data are required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data, however, they are different in terms of how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario, only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of samples that are labelled (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development set of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text 
and network view, and the development set, remain fixed for all the experiments.", "As shown in Figure 4 , when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters, and therefore, lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "In 1% for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters and just a few labelled samples, insufficient to train the parameters of GCN.", "(a) MLP-TXT+NET (b) DCCA (c) 1 GCN · X (d) 2 GCN ·Â · X Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise, and consequently decrease accuracy as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but then by adding more layers the performance doesn't change that much as gates are allowing the layer inputs to pass through the network without much change.", "The performance peaks at 4 layers which is compatible with the distribution of shortest path lengths shown in Figure 6 .", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b) , a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017) .", "MLP-TXT+NET and GCN outperform all the text-or network-only models, and also the hybrid model of Rahimi et al.", "(2017b) , indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017) , outperforming it on larger datasets, and underperforming on GEO- Table 1 : Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN and network-based GCN-LP.", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for minimal supervision scenario where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.", "Note that Do et al.", "(2017) Rahimi et al.", "(2015a) , which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1) , under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of data, however, the distribution of errors in the development 
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, it doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like Midwest U.S., and higher performance in densely populated areas such as NYC and LA as shown in Figure 7 .", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US using GCN using only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surpris-ingly OH.", "Particularly users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. 
states is provided in the supplementary material.", "Local Terms In Table 2 , local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010) ; (2) geographical topic models that learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013) ; and (3) supervised models which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a) .", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014) , supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015) , and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014) .", "Node embedding methods build heterogeneous graphs between user-user, user-location and locationlocation, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017) .", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocation 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a textbased model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a) ; (2) ensembling separately trained text-and networkbased models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017) ; and (3) jointly learning 
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all the connections to hold location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-3
Location Location Location
Profile field is noisy (Hecht et al., 2011), GPS data is scarce (Hecht and Stephens, 2014), and biased toward younger urban users (Pavalanathan and Eisenstein, 2015)
Profile field is noisy (Hecht et al., 2011), GPS data is scarce (Hecht and Stephens, 2014), and biased toward younger urban users (Pavalanathan and Eisenstein, 2015)
[]
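The MLP-TXT+NET baseline described in these records is a single-hidden-layer perceptron whose input concatenates the text view X (bag-of-words) with the user's row of Â, i.e. the network view as a vector. A forward-pass sketch with illustrative parameter names:

import numpy as np

def mlp_txt_net_forward(X, A_hat, W1, b1, W2, b2):
    # Input: text view X concatenated with A-hat (Equation 1 of the paper).
    Z = np.concatenate([X, A_hat], axis=1)
    H = np.maximum(0.0, Z @ W1 + b1)            # single ReLU hidden layer
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    return P / P.sum(axis=1, keepdims=True)     # softmax over region labels

Because its input width grows with |V| + |U|, the parameter count grows with the number of users, which is the explanation the paper gives for this baseline's weakness in the 1%-labelled setting.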
GEM-SciDuet-train-118#paper-1321#slide-4
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
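The text view in this record is a bag-of-words with binary term frequency, inverse document frequency, and l2-normalised rows. With scikit-learn this corresponds roughly to the following, where user_docs (one concatenated tweet string per user) is a placeholder:

from sklearn.feature_extraction.text import TfidfVectorizer

user_docs = ["all tweets of user 1 ...", "all tweets of user 2 ..."]  # placeholder

# binary=True makes the tf term 0/1 before idf weighting; norm="l2"
# normalises each user row, matching the construction described above.
vectorizer = TfidfVectorizer(binary=True, use_idf=True, norm="l2")
X = vectorizer.fit_transform(user_docs)  # |U| x |V| sparse text view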
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017) ).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013) .", "Multivew Geolocation Let X ∈ R |U |×|V | be the text view, consisting of the bag of words for each user in U using vocabulary V , and A ∈ 1 |U |×|U | be the network view, encoding user-user interactions.", "We partition U = U S ∪ U H into a supervised and heldout (unlabelled) set, U S and U H , respectively.", "The goal is to infer the location of unlabelled samples Y U , given the location of labelled samples Y S , where each location is encoded as a one-hot classification label, y i ∈ 1 c with c being the number of target regions.", "GCN GCN defines a neural network model f (X, A) with each layer: =D − 1 2 (A + λI)D − 1 2 H (l+1) = σ  H (l) W (l) + b , (1) whereD is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017) ; H 0 = X and the d in × d out matrix W (l) and d out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights inÂ, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u i , the output of layer l is computed by: h l+1 i = σ j∈nhood(i) ij h l j W l + b l , (2) 1 Code and data available at https://github.com/ afshinrahimi/geographconv Highway GCN: Highway GCN: , Output GCN: Figure 1 : The architecture of GCN geolocation model with layer-wise highway gates (W i h , b i h ).", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "X = BoWtext   A tanh tanh softmax H 0 H 1 H l−1 H l predict location:ŷ W l−1 , b l−1 , W l−1 h , b l−1 h W 1 , b 1 , W 1 h , b 1 h W l , b l where W l and b l are learnable layer parameters, and nhood(i) indicates the neighbours of user u i .", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015) , the output of a layer is summed with its input with gating weights T ( h l ): DCCA Given two views X and (from Equation 1) of data samples, CCA (Hotelling, 1936) , and its deep version (DCCA) (Andrew et al., 2013) learn functions f 1 (X) and f 2 (Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f 1 (X), f 2 (Â)) .", "(4) The resulting 
representations of f 1 (X) and f 2 (Â) are the compressed representations of the two views where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f 1 and f 2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise: tr(W T 1 Σ 12 W 2 ) subject to: W T 1 Σ 11 W 1 = W T 2 Σ 22 W 2 = I (5) where Σ 11 and Σ 22 are the covariances of the two outputs, and Σ 12 is the cross-covariance.", "The weights W 1 and W 2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data -the outputs of the two networks -are then concatenated to form a multiview sample representation, as shown in Figure 2 .", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010) , Figure 2 : The DCCA model architecture: First the two text and network views X and are fed into two neural networks (left), which are unsupervisedly trained to maximise the correlation of their outputs; next the outputs of the networks are concatenated, and fed as input to another neural network (right), which is trained supervisedly to predict locations.", "sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012) , with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l 2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with size 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter which controls the maximum number of users in each cluster is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with 
size 500 for the three datasets.", "The loss function is CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, accuracy of predicting a user within 161km or 100 miles from the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features , we use one-hot encoding of a user's neighbours, which are then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single layer multilayer perceptron where the input to the network is the concatenation of the text view X, the user content's bag-of-words and (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and networkbased views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013) .", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3 .", "As shown, Deep CCA seems to slightly improve the representations from pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, often large amounts of labelled data are required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data, however, they are different in terms of how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario, only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of samples that are labelled (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development set of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text 
and network view, and the development set, remain fixed for all the experiments.", "As shown in Figure 4 , when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters, and therefore, lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "In 1% for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters and just a few labelled samples, insufficient to train the parameters of GCN.", "(a) MLP-TXT+NET (b) DCCA (c) 1 GCN · X (d) 2 GCN ·Â · X Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise, and consequently decrease accuracy as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but then by adding more layers the performance doesn't change that much as gates are allowing the layer inputs to pass through the network without much change.", "The performance peaks at 4 layers which is compatible with the distribution of shortest path lengths shown in Figure 6 .", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b) , a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017) .", "MLP-TXT+NET and GCN outperform all the text-or network-only models, and also the hybrid model of Rahimi et al.", "(2017b) , indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017) , outperforming it on larger datasets, and underperforming on GEO- Table 1 : Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN and network-based GCN-LP.", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for minimal supervision scenario where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.", "Note that Do et al.", "(2017) Rahimi et al.", "(2015a) , which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1) , under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of data, however, the distribution of errors in the development 
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, it doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like Midwest U.S., and higher performance in densely populated areas such as NYC and LA as shown in Figure 7 .", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US using GCN using only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surpris-ingly OH.", "Particularly users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. 
states is provided in the supplementary material.", "Local Terms In Table 2 , local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010) ; (2) geographical topic models that learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013) ; and (3) supervised models which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a) .", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014) , supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015) , and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014) .", "Node embedding methods build heterogeneous graphs between user-user, user-location and locationlocation, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017) .", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocation 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a textbased model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a) ; (2) ensembling separately trained text-and networkbased models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017) ; and (3) jointly learning 
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and they use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of a few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed that location homophily holds across all connections.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multiview Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-4
Geolocation The three Ls
Ain't this place a geographical oddity; two weeks away from everywhere! User geolocation is the task of identifying the home location of a social media user using contextual information such as geographical variation in language use and in social interactions.
Ain't this place a geographical oddity; two weeks away from everywhere! User geolocation is the task of identifying the home location of a social media user using contextual information such as geographical variation in language use and in social interactions.
[]
GEM-SciDuet-train-118#paper-1321#slide-5
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outperforms
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017) ).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013) .", "Multiview Geolocation Let X ∈ R^(|U|×|V|) be the text view, consisting of the bag of words for each user in U using vocabulary V , and A ∈ 1^(|U|×|U|) be the network view, encoding user-user interactions.", "We partition U = U_S ∪ U_H into a supervised and heldout (unlabelled) set, U_S and U_H , respectively.", "The goal is to infer the location of unlabelled samples Y_U , given the location of labelled samples Y_S , where each location is encoded as a one-hot classification label, y_i ∈ 1^c with c being the number of target regions.", "GCN GCN defines a neural network model f(X, A) with each layer: Â = D̃^(−1/2) (A + λI) D̃^(−1/2) , H^(l+1) = σ(Â H^(l) W^(l) + b) , (1) where D̃ is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017) ; H^(0) = X and the d_in × d_out matrix W^(l) and d_out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights in Â, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u_i , the output of layer l is computed by: h_i^(l+1) = σ(Σ_{j ∈ nhood(i)} Â_ij h_j^(l) W^(l) + b^(l)) , (2) where W^(l) and b^(l) are learnable layer parameters, and nhood(i) indicates the neighbours of user u_i .", "Code and data available at https://github.com/afshinrahimi/geographconv .", "Figure 1 : The architecture of the GCN geolocation model with layer-wise highway gates (W_h^(l) , b_h^(l) ): the text view X = BoW(text) is fed through stacked highway GCN layers H^(0) , H^(1) , ..., H^(l) with tanh activations, each convolved with Â, followed by a softmax layer that predicts the location ŷ.", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015) , the output of a layer is summed with its input with gating weights T(h^(l)): h^(l+1) = h^(l+1) ◦ T(h^(l)) + h^(l) ◦ (1 − T(h^(l))) , (3) where T(h^(l)) = σ(h^(l) W_h^(l) + b_h^(l)) is a sigmoid transform gate over the layer input.", "DCCA Given two views X and Â (from Equation 1) of data samples, CCA (Hotelling, 1936) , and its deep version (DCCA) (Andrew et al., 2013) learn functions f_1(X) and f_2(Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f_1(X), f_2(Â)) . (4)",
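To make Equations 1-3 concrete, the following is a minimal NumPy/SciPy sketch of the normalised adjacency Â, one graph-convolutional layer, and the highway gate; it is an illustration only, with our own function names and shape assumptions (the gated layer assumes input and output dimensionality match), not the authors' geographconv implementation. The tanh/sigmoid choices mirror the Figure 1 description.

```python
import numpy as np
import scipy.sparse as sp

def normalise_adjacency(A, lam=1.0):
    """A_hat = D^{-1/2} (A + lam*I) D^{-1/2}, as in Equation 1.
    A is a scipy.sparse user-user adjacency matrix."""
    A_tilde = A + lam * sp.eye(A.shape[0], format="csr")
    deg = np.asarray(A_tilde.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W, b):
    """One layer H' = tanh(A_hat @ H @ W + b): a weighted average over the
    1-hop neighbourhood, then a learned linear map and nonlinearity."""
    return np.tanh(A_hat @ H @ W + b)

def highway_gcn_layer(A_hat, H, W, b, W_h, b_h):
    """Equation 3: gate the layer output against its input so that noisy
    neighbourhood expansion can be suppressed layer by layer."""
    H_new = gcn_layer(A_hat, H, W, b)
    T = 1.0 / (1.0 + np.exp(-(H @ W_h + b_h)))  # sigmoid transform gate
    return H_new * T + H * (1.0 - T)
```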
"The resulting representations of f_1(X) and f_2(Â) are the compressed representations of the two views where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f_1 and f_2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise tr(W_1^T Σ_12 W_2) subject to W_1^T Σ_11 W_1 = W_2^T Σ_22 W_2 = I , (5) where Σ_11 and Σ_22 are the covariances of the two outputs, and Σ_12 is the cross-covariance.", "The weights W_1 and W_2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data -the outputs of the two networks -are then concatenated to form a multiview sample representation, as shown in Figure 2 .", "Figure 2 : The DCCA model architecture: First the two text and network views X and Â are fed into two neural networks (left), which are unsupervisedly trained to maximise the correlation of their outputs; next the outputs of the networks are concatenated, and fed as input to another neural network (right), which is trained supervisedly to predict locations.", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010) , (2) TWITTER-US (Roller et al., 2012) , and (3) TWITTER-WORLD (Han et al., 2012) , partitioned into training, development and test sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012) , with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix Â as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A_ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l_2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with size 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter which controls the maximum number of users in each cluster is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with size 500 for the three datasets.",
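For the CCA cost in Equation 5, the final linear projections have a closed-form solution via whitening and SVD; a sketch under our own naming follows. In DCCA the sum of the resulting canonical correlations is the (negated) training loss backpropagated through the two MLPs; that deep part is omitted here, and the eps ridge term is our own regularisation choice. With k=500 this matches the output size mentioned above.

```python
import numpy as np

def linear_cca(H1, H2, k=500, eps=1e-4):
    """Maximise tr(W1^T S12 W2) s.t. W1^T S11 W1 = W2^T S22 W2 = I (Eq. 5),
    given the two view representations H1, H2 of shape (n, d1), (n, d2)."""
    H1 = H1 - H1.mean(axis=0)
    H2 = H2 - H2.mean(axis=0)
    n = H1.shape[0]
    S11 = H1.T @ H1 / (n - 1) + eps * np.eye(H1.shape[1])
    S22 = H2.T @ H2 / (n - 1) + eps * np.eye(H2.shape[1])
    S12 = H1.T @ H2 / (n - 1)
    K1 = np.linalg.inv(np.linalg.cholesky(S11))  # whitening: K1 S11 K1^T = I
    K2 = np.linalg.inv(np.linalg.cholesky(S22))
    U, s, Vt = np.linalg.svd(K1 @ S12 @ K2.T)
    W1 = K1.T @ U[:, :k]   # projections satisfying the identity constraints
    W2 = K2.T @ Vt[:k].T
    return W1, W2, s[:k]   # s[:k] are the canonical correlations
```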
"The loss function is CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, the accuracy of predicting a user within 161km (100 miles) of the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features, we use one-hot encoding of a user's neighbours, which are then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single-layer multilayer perceptron where the input to the network is the concatenation of the text view X (the user content's bag-of-words) and Â (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and network-based views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013) .", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3 .", "As shown, Deep CCA seems to slightly improve the representations from pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, often large amounts of labelled data are required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data, however, they are different in terms of how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario, only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of samples that are labelled (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development set of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text and network view, and the development set, remain fixed for all the experiments.",
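The three evaluation metrics above reduce to great-circle distances between predicted and known coordinates; a self-contained sketch is below, using our own helper names and the standard haversine formula with a 6371 km Earth radius.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km; inputs are degrees (scalars or arrays)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def geolocation_metrics(pred_coords, gold_coords):
    """pred_coords, gold_coords: (n, 2) arrays of (lat, lon) in degrees.
    Returns Mean and Median error in km, and Acc@161."""
    d = haversine_km(pred_coords[:, 0], pred_coords[:, 1],
                     gold_coords[:, 0], gold_coords[:, 1])
    return {"Mean": float(d.mean()),
            "Median": float(np.median(d)),
            "Acc@161": float((d <= 161.0).mean())}
```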
"As shown in Figure 4 , when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters, and therefore, a lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "At 1% of labelled data for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters and just a few labelled samples, insufficient to train the parameters of GCN.", "Figure 3 panels: (a) MLP-TXT+NET, (b) DCCA, (c) GCN with one convolution (Â X), (d) GCN with two convolutions (Â Â X).", "Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise, and consequently decrease accuracy as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but then by adding more layers the performance doesn't change that much, as the gates are allowing the layer inputs to pass through the network without much change.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6 .", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b) , a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017) .", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al.", "(2017b) , indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017) , outperforming it on larger datasets, and underperforming on GEOTEXT.", "Table 1 : Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN and network-based GCN-LP.", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for the minimal supervision scenario where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.", "Note that Do et al.", "(2017) use additional metadata (e.g. the profile location field and timezone) in addition to text and network information.", "GCN-LP outperforms the network-based model of Rahimi et al.", "(2015a) , which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1) , under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of data, however, the distribution of errors in the development
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, this doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example, users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation, fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like the Midwest U.S., and higher performance in densely populated areas such as NYC and LA as shown in Figure 7 .", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US using GCN with only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surprisingly OH.", "In particular, users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S.
states is provided in the supplementary material.", "Local Terms In Table 2 , local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in the case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010) ; (2) geographical topic models that learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013) ; and (3) supervised models which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a) .", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014) , supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015) , and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014) .", "Node embedding methods build heterogeneous graphs between user-user, user-location and location-location, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017) .", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocate 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a text-based model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a) ; (2) ensembling separately trained text- and network-based models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017) ; and (3) jointly learning
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and they use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of a few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed that location homophily holds across all connections.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multiview Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-5
Previous Work not exhaustive
Text-based Supervised Classification Network-based Semi-supervised Regression No Text Joint/Hybrid Text+Network No Network Do et al. (2017) Don't utilise unlabelled text data Miura et al. (2017) Our work: Text+Network Semi-supervised Geolocation
Text-based Supervised Classification Network-based Semi-supervised Regression No Text Joint/Hybrid Text+Network No Network Do et al. (2017) Don't utilise unlabelled text data Miura et al. (2017) Our work: Text+Network Semi-supervised Geolocation
[]
GEM-SciDuet-train-118#paper-1321#slide-7
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outperforms
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017) ).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013) .", "Multiview Geolocation Let X ∈ R^(|U|×|V|) be the text view, consisting of the bag of words for each user in U using vocabulary V , and A ∈ 1^(|U|×|U|) be the network view, encoding user-user interactions.", "We partition U = U_S ∪ U_H into a supervised and heldout (unlabelled) set, U_S and U_H , respectively.", "The goal is to infer the location of unlabelled samples Y_U , given the location of labelled samples Y_S , where each location is encoded as a one-hot classification label, y_i ∈ 1^c with c being the number of target regions.", "GCN GCN defines a neural network model f(X, A) with each layer: Â = D̃^(−1/2) (A + λI) D̃^(−1/2) , H^(l+1) = σ(Â H^(l) W^(l) + b) , (1) where D̃ is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017) ; H^(0) = X and the d_in × d_out matrix W^(l) and d_out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights in Â, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u_i , the output of layer l is computed by: h_i^(l+1) = σ(Σ_{j ∈ nhood(i)} Â_ij h_j^(l) W^(l) + b^(l)) , (2) where W^(l) and b^(l) are learnable layer parameters, and nhood(i) indicates the neighbours of user u_i .", "Code and data available at https://github.com/afshinrahimi/geographconv .", "Figure 1 : The architecture of the GCN geolocation model with layer-wise highway gates (W_h^(l) , b_h^(l) ): the text view X = BoW(text) is fed through stacked highway GCN layers H^(0) , H^(1) , ..., H^(l) with tanh activations, each convolved with Â, followed by a softmax layer that predicts the location ŷ.", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015) , the output of a layer is summed with its input with gating weights T(h^(l)): h^(l+1) = h^(l+1) ◦ T(h^(l)) + h^(l) ◦ (1 − T(h^(l))) , (3) where T(h^(l)) = σ(h^(l) W_h^(l) + b_h^(l)) is a sigmoid transform gate over the layer input.", "DCCA Given two views X and Â (from Equation 1) of data samples, CCA (Hotelling, 1936) , and its deep version (DCCA) (Andrew et al., 2013) learn functions f_1(X) and f_2(Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f_1(X), f_2(Â)) . (4)",
"The resulting representations of f_1(X) and f_2(Â) are the compressed representations of the two views where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f_1 and f_2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise tr(W_1^T Σ_12 W_2) subject to W_1^T Σ_11 W_1 = W_2^T Σ_22 W_2 = I , (5) where Σ_11 and Σ_22 are the covariances of the two outputs, and Σ_12 is the cross-covariance.", "The weights W_1 and W_2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data -the outputs of the two networks -are then concatenated to form a multiview sample representation, as shown in Figure 2 .", "Figure 2 : The DCCA model architecture: First the two text and network views X and Â are fed into two neural networks (left), which are unsupervisedly trained to maximise the correlation of their outputs; next the outputs of the networks are concatenated, and fed as input to another neural network (right), which is trained supervisedly to predict locations.", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010) , (2) TWITTER-US (Roller et al., 2012) , and (3) TWITTER-WORLD (Han et al., 2012) , partitioned into training, development and test sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012) , with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix Â as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A_ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l_2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with size 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter which controls the maximum number of users in each cluster is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with size 500 for the three datasets.",
"The loss function is CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, the accuracy of predicting a user within 161km (100 miles) of the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features, we use one-hot encoding of a user's neighbours, which are then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single-layer multilayer perceptron where the input to the network is the concatenation of the text view X (the user content's bag-of-words) and Â (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and network-based views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013) .", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3 .", "As shown, Deep CCA seems to slightly improve the representations from pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, often large amounts of labelled data are required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data, however, they are different in terms of how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario, only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of samples that are labelled (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development set of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text and network view, and the development set, remain fixed for all the experiments.",
"As shown in Figure 4 , when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters, and therefore, a lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "At 1% of labelled data for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters and just a few labelled samples, insufficient to train the parameters of GCN.", "Figure 3 panels: (a) MLP-TXT+NET, (b) DCCA, (c) GCN with one convolution (Â X), (d) GCN with two convolutions (Â Â X).", "Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise, and consequently decrease accuracy as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but then by adding more layers the performance doesn't change that much, as the gates are allowing the layer inputs to pass through the network without much change.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6 .", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b) , a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017) .", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al.", "(2017b) , indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017) , outperforming it on larger datasets, and underperforming on GEOTEXT.", "Table 1 : Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN and network-based GCN-LP.", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for the minimal supervision scenario where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.", "Note that Do et al.", "(2017) use additional metadata (e.g. the profile location field and timezone) in addition to text and network information.", "GCN-LP outperforms the network-based model of Rahimi et al.", "(2015a) , which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1) , under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of data, however, the distribution of errors in the development
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, this doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example, users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation, fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like the Midwest U.S., and higher performance in densely populated areas such as NYC and LA as shown in Figure 7 .", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US using GCN with only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surprisingly OH.", "In particular, users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S.
states is provided in the supplementary material.", "Local Terms In Table 2 , local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in the case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010) ; (2) geographical topic models that learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013) ; and (3) supervised models which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a) .", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014) , supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015) , and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014) .", "Node embedding methods build heterogeneous graphs between user-user, user-location and location-location, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017) .", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocate 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a text-based model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a) ; (2) ensembling separately trained text- and network-based models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017) ; and (3) jointly learning
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and they use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of a few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed that location homophily holds across all connections.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multiview Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-7
Discretisation of Labels
Cluster continuous lat/lon: cluster ids are labels. Use the median training point of the predicted region as the final continuous prediction. Evaluate using Mean and Median errors between the known and the predicted coordinates.
Cluster continuous lat/lon: cluster ids are labels. Use the median training point of the predicted region as the final continuous prediction. Evaluate using Mean and Median errors between the known and the predicted coordinates.
[]
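The "Discretisation of Labels" slide above compresses the target construction and evaluation used throughout the paper: training lat/lon coordinates are partitioned with a k-d tree, cluster ids become classification labels, and a predicted cluster is mapped back to the median training point for distance-based evaluation. Below is a minimal sketch of that pipeline; the median-split partitioner, the toy coordinates, and the function names (`kd_discretise`, `haversine`) are illustrative assumptions of mine, not the authors' code.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def kd_discretise(coords, bucket_size):
    """Median-split k-d partitioning of lat/lon points; returns a cluster
    id per training point. Splits alternate between the two axes until a
    leaf holds at most `bucket_size` points."""
    labels = np.zeros(len(coords), dtype=int)
    next_label = [0]

    def split(idx, depth):
        if len(idx) <= bucket_size:
            labels[idx] = next_label[0]
            next_label[0] += 1
            return
        order = idx[np.argsort(coords[idx, depth % 2])]
        mid = len(order) // 2
        split(order[:mid], depth + 1)
        split(order[mid:], depth + 1)

    split(np.arange(len(coords)), 0)
    return labels

# Toy usage: discretise training coordinates, then evaluate predictions by
# mapping each predicted cluster to the median training point of that cluster.
train_coords = np.random.uniform([25, -125], [49, -67], size=(1000, 2))
labels = kd_discretise(train_coords, bucket_size=50)
centres = np.array([np.median(train_coords[labels == c], axis=0)
                    for c in range(labels.max() + 1)])

true = train_coords[:5]
pred = centres[labels[:5]]                     # stand-in for model output
err = haversine(true[:, 0], true[:, 1], pred[:, 0], pred[:, 1])
print("Mean %.1f km, Median %.1f km, Acc@161 %.2f"
      % (err.mean(), np.median(err), (err < 161).mean()))
```

Backing off to the median training point of the predicted cell, rather than its geometric centre, keeps the continuous prediction robust to elongated k-d cells and outlying coordinates, which fits the paper's choice of Median and Mean error as evaluation metrics.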
GEM-SciDuet-train-118#paper-1321#slide-8
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017) ).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013) .", "Multivew Geolocation Let X ∈ R |U |×|V | be the text view, consisting of the bag of words for each user in U using vocabulary V , and A ∈ 1 |U |×|U | be the network view, encoding user-user interactions.", "We partition U = U S ∪ U H into a supervised and heldout (unlabelled) set, U S and U H , respectively.", "The goal is to infer the location of unlabelled samples Y U , given the location of labelled samples Y S , where each location is encoded as a one-hot classification label, y i ∈ 1 c with c being the number of target regions.", "GCN GCN defines a neural network model f (X, A) with each layer: =D − 1 2 (A + λI)D − 1 2 H (l+1) = σ  H (l) W (l) + b , (1) whereD is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017) ; H 0 = X and the d in × d out matrix W (l) and d out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights inÂ, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u i , the output of layer l is computed by: h l+1 i = σ j∈nhood(i) ij h l j W l + b l , (2) 1 Code and data available at https://github.com/ afshinrahimi/geographconv Highway GCN: Highway GCN: , Output GCN: Figure 1 : The architecture of GCN geolocation model with layer-wise highway gates (W i h , b i h ).", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "X = BoWtext   A tanh tanh softmax H 0 H 1 H l−1 H l predict location:ŷ W l−1 , b l−1 , W l−1 h , b l−1 h W 1 , b 1 , W 1 h , b 1 h W l , b l where W l and b l are learnable layer parameters, and nhood(i) indicates the neighbours of user u i .", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015) , the output of a layer is summed with its input with gating weights T ( h l ): DCCA Given two views X and (from Equation 1) of data samples, CCA (Hotelling, 1936) , and its deep version (DCCA) (Andrew et al., 2013) learn functions f 1 (X) and f 2 (Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f 1 (X), f 2 (Â)) .", "(4) The resulting 
representations of f 1 (X) and f 2 (Â) are the compressed representations of the two views where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f 1 and f 2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise: tr(W T 1 Σ 12 W 2 ) subject to: W T 1 Σ 11 W 1 = W T 2 Σ 22 W 2 = I (5) where Σ 11 and Σ 22 are the covariances of the two outputs, and Σ 12 is the cross-covariance.", "The weights W 1 and W 2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data -the outputs of the two networks -are then concatenated to form a multiview sample representation, as shown in Figure 2 .", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010) , Figure 2 : The DCCA model architecture: First the two text and network views X and are fed into two neural networks (left), which are unsupervisedly trained to maximise the correlation of their outputs; next the outputs of the networks are concatenated, and fed as input to another neural network (right), which is trained supervisedly to predict locations.", "sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012) , with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l 2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with size 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter which controls the maximum number of users in each cluster is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with 
size 500 for the three datasets.", "The loss function is CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, accuracy of predicting a user within 161km or 100 miles from the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features , we use one-hot encoding of a user's neighbours, which are then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single layer multilayer perceptron where the input to the network is the concatenation of the text view X, the user content's bag-of-words and (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and networkbased views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013) .", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3 .", "As shown, Deep CCA seems to slightly improve the representations from pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, often large amounts of labelled data are required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data, however, they are different in terms of how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario, only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of samples that are labelled (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development set of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text 
and network view, and the development set, remain fixed for all the experiments.", "As shown in Figure 4 , when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters, and therefore, lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "In 1% for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters and just a few labelled samples, insufficient to train the parameters of GCN.", "(a) MLP-TXT+NET (b) DCCA (c) 1 GCN · X (d) 2 GCN ·Â · X Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise, and consequently decrease accuracy as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but then by adding more layers the performance doesn't change that much as gates are allowing the layer inputs to pass through the network without much change.", "The performance peaks at 4 layers which is compatible with the distribution of shortest path lengths shown in Figure 6 .", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b) , a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017) .", "MLP-TXT+NET and GCN outperform all the text-or network-only models, and also the hybrid model of Rahimi et al.", "(2017b) , indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017) , outperforming it on larger datasets, and underperforming on GEO- Table 1 : Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN and network-based GCN-LP.", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for minimal supervision scenario where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.", "Note that Do et al.", "(2017) Rahimi et al.", "(2015a) , which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1) , under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of data, however, the distribution of errors in the development 
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, it doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like Midwest U.S., and higher performance in densely populated areas such as NYC and LA as shown in Figure 7 .", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US using GCN using only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surpris-ingly OH.", "Particularly users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. 
states is provided in the supplementary material.", "Local Terms In Table 2 , local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010) ; (2) geographical topic models that learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013) ; and (3) supervised models which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a) .", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014) , supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015) , and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014) .", "Node embedding methods build heterogeneous graphs between user-user, user-location and locationlocation, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017) .", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocation 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a textbased model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a) ; (2) ensembling separately trained text-and networkbased models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017) ; and (3) jointly learning 
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all the connections to hold location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-8
Text and Network Views of Data
Example users: Karin, Mark, Steven, Trevor. Normalised adjacency matrix: A; text BoW: X. Two users are connected if they have a common @-mention.
Example users: Karin, Mark, Steven, Trevor. Normalised adjacency matrix: A; text BoW: X. Two users are connected if they have a common @-mention.
[]
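The record above contains both the GCN propagation rule (Equation 1, with the symmetric normalisation Â = D^{-1/2}(A + λI)D^{-1/2} and layer-wise highway gates) and the slide describing the two input views. A minimal numpy/scikit-learn sketch of both follows; the toy timelines, handle scheme, and helper names are assumptions of mine, and the random-weight layer only illustrates the shapes and gating, not a trained model.

```python
import numpy as np
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer

def normalise_adjacency(A, lam=1.0):
    """Symmetric normalisation of Equation 1: D^{-1/2}(A + lam*I)D^{-1/2},
    where lam weighs a node against its neighbourhood."""
    A_hat = A + lam * np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def build_views(timelines, mentions, handles):
    """Text view X (binary-tf TF-IDF, l2-normalised rows) and the collapsed
    @-mention adjacency: an edge if one user mentions another, or if two
    users mention a common (possibly external) account."""
    X = TfidfVectorizer(binary=True, norm="l2").fit_transform(timelines)
    idx = {h: i for i, h in enumerate(handles)}
    A = np.zeros((len(handles), len(handles)))
    for i, ms in enumerate(mentions):            # direct mention edges
        for m in ms:
            if m in idx and idx[m] != i:
                A[i, idx[m]] = A[idx[m], i] = 1.0
    by_target = {}                               # co-mention collapsing
    for i, ms in enumerate(mentions):
        for m in ms:
            by_target.setdefault(m, []).append(i)
    for users in by_target.values():
        for i, j in combinations(users, 2):
            A[i, j] = A[j, i] = 1.0
    return X, normalise_adjacency(A)

def highway_gcn_layer(A_hat, H, W, b, W_h, b_h):
    """One highway-gated GCN layer: smooth features over Â, transform with
    tanh, then gate between the new output and the layer input."""
    out = np.tanh(A_hat @ H @ W + b)
    T = 1.0 / (1.0 + np.exp(-(H @ W_h + b_h)))   # transform gate in (0, 1)
    return out * T + H * (1.0 - T)

# Toy usage: users "a" and "b" are linked only by co-mentioning "@c".
X, A_hat = build_views(["coffee rain seattle", "sxsw tacos austin"],
                       [{"c"}, {"c"}], ["a", "b"])
H, rng = X.toarray(), np.random.default_rng(0)
d = H.shape[1]
W, W_h = 0.1 * rng.normal(size=(d, d)), 0.1 * rng.normal(size=(d, d))
print(highway_gcn_layer(A_hat, H, W, np.zeros(d), np.zeros(d) + W_h[0] * 0, np.zeros(d)).shape)
```

Note the gate requires the layer output to match the input dimensionality, so this sketch keeps the weight matrices square; in practice an initial projection from the vocabulary size down to the paper's per-dataset hidden sizes (300/600/900) would precede the gated layers.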
GEM-SciDuet-train-118#paper-1321#slide-9
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017) ).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013) .", "Multivew Geolocation Let X ∈ R |U |×|V | be the text view, consisting of the bag of words for each user in U using vocabulary V , and A ∈ 1 |U |×|U | be the network view, encoding user-user interactions.", "We partition U = U S ∪ U H into a supervised and heldout (unlabelled) set, U S and U H , respectively.", "The goal is to infer the location of unlabelled samples Y U , given the location of labelled samples Y S , where each location is encoded as a one-hot classification label, y i ∈ 1 c with c being the number of target regions.", "GCN GCN defines a neural network model f (X, A) with each layer: =D − 1 2 (A + λI)D − 1 2 H (l+1) = σ  H (l) W (l) + b , (1) whereD is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017) ; H 0 = X and the d in × d out matrix W (l) and d out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights inÂ, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u i , the output of layer l is computed by: h l+1 i = σ j∈nhood(i) ij h l j W l + b l , (2) 1 Code and data available at https://github.com/ afshinrahimi/geographconv Highway GCN: Highway GCN: , Output GCN: Figure 1 : The architecture of GCN geolocation model with layer-wise highway gates (W i h , b i h ).", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "X = BoWtext   A tanh tanh softmax H 0 H 1 H l−1 H l predict location:ŷ W l−1 , b l−1 , W l−1 h , b l−1 h W 1 , b 1 , W 1 h , b 1 h W l , b l where W l and b l are learnable layer parameters, and nhood(i) indicates the neighbours of user u i .", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015) , the output of a layer is summed with its input with gating weights T ( h l ): DCCA Given two views X and (from Equation 1) of data samples, CCA (Hotelling, 1936) , and its deep version (DCCA) (Andrew et al., 2013) learn functions f 1 (X) and f 2 (Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f 1 (X), f 2 (Â)) .", "(4) The resulting 
representations of f 1 (X) and f 2 (Â) are the compressed representations of the two views where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f 1 and f 2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise: tr(W T 1 Σ 12 W 2 ) subject to: W T 1 Σ 11 W 1 = W T 2 Σ 22 W 2 = I (5) where Σ 11 and Σ 22 are the covariances of the two outputs, and Σ 12 is the cross-covariance.", "The weights W 1 and W 2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data -the outputs of the two networks -are then concatenated to form a multiview sample representation, as shown in Figure 2 .", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010) , Figure 2 : The DCCA model architecture: First the two text and network views X and are fed into two neural networks (left), which are unsupervisedly trained to maximise the correlation of their outputs; next the outputs of the networks are concatenated, and fed as input to another neural network (right), which is trained supervisedly to predict locations.", "sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012) , with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l 2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with size 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter which controls the maximum number of users in each cluster is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with 
size 500 for the three datasets.", "The loss function is CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, accuracy of predicting a user within 161km or 100 miles from the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features , we use one-hot encoding of a user's neighbours, which are then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single layer multilayer perceptron where the input to the network is the concatenation of the text view X, the user content's bag-of-words and (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and networkbased views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013) .", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3 .", "As shown, Deep CCA seems to slightly improve the representations from pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, often large amounts of labelled data are required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data, however, they are different in terms of how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario, only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of samples that are labelled (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development set of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text 
and network view, and the development set, remain fixed for all the experiments.", "As shown in Figure 4 , when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters, and therefore, lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "In 1% for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters and just a few labelled samples, insufficient to train the parameters of GCN.", "(a) MLP-TXT+NET (b) DCCA (c) 1 GCN · X (d) 2 GCN ·Â · X Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise, and consequently decrease accuracy as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but then by adding more layers the performance doesn't change that much as gates are allowing the layer inputs to pass through the network without much change.", "The performance peaks at 4 layers which is compatible with the distribution of shortest path lengths shown in Figure 6 .", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b) , a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017) .", "MLP-TXT+NET and GCN outperform all the text-or network-only models, and also the hybrid model of Rahimi et al.", "(2017b) , indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017) , outperforming it on larger datasets, and underperforming on GEO- Table 1 : Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN and network-based GCN-LP.", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for minimal supervision scenario where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.", "Note that Do et al.", "(2017) Rahimi et al.", "(2015a) , which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1) , under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of data, however, the distribution of errors in the development 
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, it doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like Midwest U.S., and higher performance in densely populated areas such as NYC and LA as shown in Figure 7 .", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US using GCN using only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surpris-ingly OH.", "Particularly users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. 
states is provided in the supplementary material.", "Local Terms In Table 2 , local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010) ; (2) geographical topic models that learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013) ; and (3) supervised models which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a) .", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014) , supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015) , and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014) .", "Node embedding methods build heterogeneous graphs between user-user, user-location and locationlocation, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017) .", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocation 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a textbased model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a) ; (2) ensembling separately trained text-and networkbased models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017) ; and (3) jointly learning 
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all the connections to hold location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-9
Baseline 1: FeatConcat
Concatenate A and X, and feed them to a DNN: the dimensions of A, and consequently the number of parameters, grow with the number of samples.
Concatenate A and X, and feed them to a DNN: the dimensions of A, and consequently the number of parameters, grow with the number of samples.
[]
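The FeatConcat slide above summarises the MLP-TXT+NET baseline from the paper: concatenate the text view X and the vectorised network view Â, and train a one-hidden-layer ReLU MLP, transductively predicting labels for all users. Below is a minimal sketch using scikit-learn's MLPClassifier in place of the authors' training setup; the toy data and the function name are mine.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.neural_network import MLPClassifier

def featconcat_baseline(X, A_hat, y, labelled_idx, hidden=300):
    """MLP-TXT+NET: one-hidden-layer ReLU MLP over [X ; Â]. Each row of Â
    has one column per user, so the input width -- and hence the first
    weight matrix -- grows with the number of samples, as the slide notes."""
    feats = hstack([csr_matrix(X), csr_matrix(A_hat)]).tocsr()
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), activation="relu",
                        max_iter=500, random_state=0)
    clf.fit(feats[labelled_idx], y[labelled_idx])
    return clf.predict(feats)   # transductive: predictions for every user

# Toy usage: 4 users, 10 BoW features, 2 location clusters, 3 labelled users.
rng = np.random.default_rng(0)
X = csr_matrix(rng.random((4, 10)))
A_hat = rng.random((4, 4))
y = np.array([0, 1, 0, 1])
print(featconcat_baseline(X, A_hat, y, labelled_idx=np.array([0, 1, 2]),
                          hidden=8))
```

This input-width dependence on |U| is consistent with the paper's finding that MLP-TXT+NET is strong when labels are plentiful but falls behind GCN and DCCA under the 1% minimal-supervision setting, where there are too few labelled samples to fit that many parameters.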
GEM-SciDuet-train-118#paper-1321#slide-10
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017)).", "We also introduce two multiview baselines: MLP-TXT+NET, based on concatenation of text and network, and DCCA, based on Deep Canonical Correlation Analysis (Andrew et al., 2013).", "Multiview Geolocation Let $X \in \mathbb{R}^{|U| \times |V|}$ be the text view, consisting of the bag of words for each user in U using vocabulary V, and $A \in \mathbb{1}^{|U| \times |U|}$ be the network view, encoding user-user interactions.", "We partition $U = U_S \cup U_H$ into a supervised and held-out (unlabelled) set, $U_S$ and $U_H$, respectively.", "The goal is to infer the location of unlabelled samples $Y_U$, given the location of labelled samples $Y_S$, where each location is encoded as a one-hot classification label, $y_i \in \mathbb{1}^c$, with c being the number of target regions.", "GCN GCN defines a neural network model $f(X, A)$ with each layer computed as: $\hat{A} = \tilde{D}^{-\frac{1}{2}}(A + \lambda I)\tilde{D}^{-\frac{1}{2}}$, $H^{(l+1)} = \sigma(\hat{A} H^{(l)} W^{(l)} + b)$ (1), where $\tilde{D}$ is the degree matrix of $A + \lambda I$; the hyperparameter $\lambda$ controls the weight of a node against its neighbourhood, and is set to 1 in the original model (Kipf and Welling, 2017); $H^{(0)} = X$; the $d_{in} \times d_{out}$ matrix $W^{(l)}$ and $d_{out} \times 1$ matrix $b$ are trainable layer parameters; and $\sigma$ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using the weights in $\hat{A}$, and performs a linear transformation using W and b followed by a nonlinear activation function ($\sigma$).", "In other words, for user $u_i$, the output of layer l is computed by: $h_i^{(l+1)} = \sigma(\sum_{j \in nhood(i)} \hat{A}_{ij} h_j^{(l)} W^{(l)} + b^{(l)})$ (2), where $W^{(l)}$ and $b^{(l)}$ are learnable layer parameters, and nhood(i) indicates the neighbours of user $u_i$.", "Code and data are available at https://github.com/afshinrahimi/geographconv (footnote 1).", "[Figure 1: The architecture of the GCN geolocation model with layer-wise highway gates ($W_h^i$, $b_h^i$). GCN is applied to a BoW model of user content over the @-mention graph to predict user location, passing $X = BoW(text)$ through stacked tanh highway-GCN layers $H^{(0)}, H^{(1)}, \ldots, H^{(l)}$ to a softmax that predicts the location $\hat{y}$.]", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015), the output of a layer is combined with its input using gating weights $T(h^{(l)})$: $h^{(l+1)} = h^{(l+1)} \circ T(h^{(l)}) + h^{(l)} \circ (1 - T(h^{(l)}))$ (3).", "DCCA Given two views X and $\hat{A}$ (from Equation 1) of the data samples, CCA (Hotelling, 1936), and its deep version (DCCA) (Andrew et al., 2013), learn functions $f_1(X)$ and $f_2(\hat{A})$ such that the correlation between the outputs of the two functions is maximised: $\rho = corr(f_1(X), f_2(\hat{A}))$ (4).",
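To make the propagation rule concrete, here is a minimal numpy sketch of Equations 1, 2, and 3: the normalised adjacency, one graph-convolutional layer, and the highway-gated variant. This is an illustration rather than the authors' released code (that lives in the repository linked above); the sigmoid parameterisation of the gate T, the function names, and the dense-matrix setting are our assumptions.

```python
import numpy as np

def normalise_adjacency(A, lam=1.0):
    """A_hat = D^(-1/2) (A + lam*I) D^(-1/2), as in Equation 1."""
    A_tilde = A + lam * np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)                      # degrees of A + lam*I (all > 0)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W, b, sigma=np.tanh):
    """One propagation step: sigma(A_hat @ H @ W + b), as in Equations 1-2."""
    return sigma(A_hat @ H @ W + b)

def highway_gcn_layer(A_hat, H, W, b, W_h, b_h):
    """Highway-gated layer (Equation 3); needs matching input/output sizes."""
    H_conv = gcn_layer(A_hat, H, W, b)
    T = 1.0 / (1.0 + np.exp(-(H @ W_h + b_h)))   # transform gate in (0, 1)
    return H_conv * T + H * (1.0 - T)            # gated mix of output and input
```

For a 3-layer model like the one used here, one would stack three `highway_gcn_layer` calls and finish with a graph-convolutional softmax output layer.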
"The resulting representations of $f_1(X)$ and $f_2(\hat{A})$ are compressed representations of the two views in which the uncorrelated noise between them is reduced.", "The new representations ideally capture user communities for the network view, and the language model of that community for the text view; their concatenation is a multiview representation of the data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the $f_1$ and $f_2$ functions of Equation 4), the output of which is used to estimate the CCA cost: maximise $tr(W_1^T \Sigma_{12} W_2)$ subject to $W_1^T \Sigma_{11} W_1 = W_2^T \Sigma_{22} W_2 = I$ (5), where $\Sigma_{11}$ and $\Sigma_{22}$ are the covariances of the two outputs, and $\Sigma_{12}$ is the cross-covariance.", "The weights $W_1$ and $W_2$ are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data (the outputs of the two networks) are then concatenated to form a multiview sample representation, as shown in Figure 2.", "Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010), (2) TWITTER-US, and (3) TWITTER-WORLD, partitioned into training, development and test sets.", "[Figure 2: The DCCA model architecture: first the text and network views X and $\hat{A}$ are fed into two neural networks (left), which are trained without supervision to maximise the correlation of their outputs; next the outputs of the networks are concatenated, and fed as input to another neural network (right), which is trained with supervision to predict locations.]", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree, following Roller et al. (2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build the matrix $\hat{A}$ as in Equation 1 using the collapsed @-mention graph between users, where two users are connected ($A_{ij} = 1$) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and $l_2$ normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with sizes 300, 600, and 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD, respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter, which controls the maximum number of users in each cluster, is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with size 500 for the three datasets.",
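As a sketch of the view construction just described: the text view uses binary term frequency, idf weighting, and l2 row-normalisation, and the network view is a collapsed @-mention adjacency with direct-mention and co-mention edges. The mention regex, the helper name build_views, and the dense adjacency are our simplifications; the paper does not specify this exact implementation.

```python
import re
from itertools import combinations
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def build_views(users, timelines):
    # Text view X: binary tf, idf weighting, l2-normalised rows.
    vectoriser = TfidfVectorizer(binary=True, use_idf=True, norm="l2")
    X = vectoriser.fit_transform(timelines)

    # Network view A: collapsed @-mention graph between the dataset's users.
    index = {u: i for i, u in enumerate(users)}
    A = np.zeros((len(users), len(users)), dtype=np.int8)
    mentioned_by = {}
    for u, text in zip(users, timelines):
        for m in re.findall(r"@(\w+)", text):
            mentioned_by.setdefault(m, set()).add(index[u])
            if m in index:                       # direct mention between two users
                A[index[u], index[m]] = A[index[m], index[u]] = 1
    # Co-mention of any third account also links two users; very highly
    # mentioned accounts may need filtering to keep this tractable.
    for mentioners in mentioned_by.values():
        for i, j in combinations(mentioners, 2):
            A[i, j] = A[j, i] = 1
    return X, A
```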
"The loss function is the CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, and 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, the accuracy of predicting a user within 161km (100 miles) of the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features, we use a one-hot encoding of a user's neighbours, which is then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks, which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used as input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a multilayer perceptron with a single hidden layer, where the input to the network is the concatenation of the text view X (the user content's bag-of-words) and $\hat{A}$ (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the text-based and network-based views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013).", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3.", "As shown, Deep CCA seems to slightly improve the representations over pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, large amounts of labelled data are often required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning, where unlabelled (e.g. non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data; however, they differ in how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of labelled samples (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development sets of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text and network views, and the development set, remain fixed for all the experiments.",
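The three evaluation measures above can be computed from great-circle distances; a small sketch follows, assuming the haversine formula with a 6371 km earth radius. The function names are ours, not the authors'.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def geolocation_metrics(true_coords, pred_coords):
    """Median error, Mean error, and Acc@161 over paired (lat, lon) lists."""
    errors = np.array([haversine_km(t[0], t[1], p[0], p[1])
                       for t, p in zip(true_coords, pred_coords)])
    return {"Mean": errors.mean(),
            "Median": np.median(errors),
            "Acc@161": (errors <= 161.0).mean()}
```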
"As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters and therefore a lower supervision requirement to optimise them.", "When enough training data is available (e.g. more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between the network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "At 1% of labelled data for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters; the few labelled samples are insufficient to train the parameters of GCN.", "[Figure 3 panels: (a) MLP-TXT+NET; (b) DCCA; (c) 1-layer GCN, $\hat{A}X$; (d) 2-layer GCN, $\hat{A}\hat{A}X$.]", "Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise and consequently decrease accuracy, as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added; adding further layers changes performance little, as the gates allow the layer inputs to pass through the network without much change.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6.", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017).", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al. (2017b), indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al. (2017), outperforming it on the larger datasets, and underperforming on GEOTEXT.", "[Table 1: Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN, and network-based GCN-LP. The models are compared with text-only and network-only methods. The performance of the three joint models is also reported for the minimal supervision scenario where only 1% of the total samples are labelled. \"-\" signifies that no results were reported for the given metric or dataset.]", "Note that Do et al. (2017) [...].", "GCN-LP outperforms Rahimi et al. (2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of the data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than in GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of the data; however, the distribution of errors over the development set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.",
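A sketch of the labelled-fraction experiment described above: hold the views and the development set fixed, reveal an increasing share of training labels, and record development Median error at each fraction. The train_and_eval callable is a hypothetical stand-in for fitting any of the three models.

```python
import numpy as np

def label_fraction_curve(train_idx, fractions, train_and_eval, seed=0):
    """Map each labelled fraction to the dev Median error it yields."""
    rng = np.random.RandomState(seed)
    shuffled = rng.permutation(train_idx)
    results = {}
    for frac in fractions:                  # e.g. [0.01, 0.05, 0.1, 0.2, 1.0]
        n = max(1, int(frac * len(shuffled)))
        labelled = shuffled[:n]             # supervised set U_S
        unlabelled = shuffled[n:]           # remainder treated as U_H
        results[frac] = train_and_eval(labelled, unlabelled)
    return results
```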
"Although we evaluate geolocation models with Median, Mean, and Acc@161, this does not mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussion, making the geolocation of users in those areas simpler.", "For example, users in LA are more likely to talk about LA-related issues such as their sports teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sports teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation, fall into different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like the Midwest of the U.S., and higher performance in densely populated areas such as NYC and LA, as shown in Figure 7.", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on the known state and predicted state for development users of TWITTER-US, using GCN trained on only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surprisingly OH.", "In particular, users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX, where users from neighbouring smaller states are misclassified as being there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. states is provided in the supplementary material.",
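The state-level error analysis can be sketched as below: map each user's true and predicted coordinates to a U.S. state and tabulate the pairs. Here coords_to_state is a placeholder for whatever reverse-geocoding lookup maps a coordinate (or its k-d tree cluster centre) to a state; the paper does not specify one.

```python
from collections import Counter

def state_confusion(true_coords, pred_coords, coords_to_state):
    """Count (true_state, predicted_state) pairs over the development users."""
    counts = Counter()
    for t, p in zip(true_coords, pred_coords):
        counts[(coords_to_state(*t), coords_to_state(*p))] += 1
    return counts  # e.g. counts[("TX", "CA")] = users from TX predicted in CA
```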
"Local Terms In Table 2, local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded, to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in the case of Seattle, #goseahawks is an important term that is not present in the 1% labelled data but is present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "[Table 2: We present the terms that were present only in the unlabelled data; the terms include city names, hashtags, food names and internet abbreviations.]", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models, which map geographical references in text to locations, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010); (2) geographical topic models, which learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013); and (3) supervised models, which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a).", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014), supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015), and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014).", "Node embedding methods build heterogeneous graphs between user-user, user-location and location-location pairs, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017).", "Network-based models fail to geolocate disconnected users: Jurgens et al. (2015) couldn't geolocate 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information, such as toponyms or locations predicted from a text-based model, as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a); (2) ensembling separately trained text- and network-based models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017); and (3) jointly learning
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all the connections to hold location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
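Relating to the Local Terms analysis above (Section 4.6 in the headers below), one simple way to reproduce it, though not necessarily the authors' exact procedure: rank terms by their mass among users the model places in a region, and drop any term that already occurs in the labelled users' text, leaving vocabulary reachable only through the unlabelled data. All names here are ours.

```python
import numpy as np

def local_terms(X, vocab, pred_region, labelled_idx, region, k=10):
    """Top-k region terms absent from the labelled users' bag-of-words."""
    rows = np.flatnonzero(pred_region == region)           # users placed in region
    scores = np.asarray(X[rows].sum(axis=0)).ravel()       # term mass in region
    seen = set(np.flatnonzero(
        np.asarray(X[labelled_idx].sum(axis=0)).ravel()))  # terms in labelled data
    ranked = [i for i in np.argsort(-scores) if i not in seen]
    return [vocab[i] for i in ranked[:k]]
```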
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-10
Baseline 2 DCCA
FC linear FC softmax FC sigmoid FC ReLU X : text BoW A: Neighbours Unsupervised DCCA Supervised Geolocation Learn a shared representation using Deep Canonical Correlation
FC linear FC softmax FC sigmoid FC ReLU X : text BoW A: Neighbours Unsupervised DCCA Supervised Geolocation Learn a shared representation using Deep Canonical Correlation
[]
GEM-SciDuet-train-118#paper-1321#slide-11
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
GEM-SciDuet-train-118#paper-1321#slide-11
Proposed Model GCN
H l1 Highway GCN: , Adding more layers results in expanded neighbourhood smoothing: control with highway gates W lh, bl h
H l1 Highway GCN: , Adding more layers results in expanded neighbourhood smoothing: control with highway gates W lh, bl h
[]
GEM-SciDuet-train-118#paper-1321#slide-12
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the stateof-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017) ).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013) .", "Multivew Geolocation Let X ∈ R |U |×|V | be the text view, consisting of the bag of words for each user in U using vocabulary V , and A ∈ 1 |U |×|U | be the network view, encoding user-user interactions.", "We partition U = U S ∪ U H into a supervised and heldout (unlabelled) set, U S and U H , respectively.", "The goal is to infer the location of unlabelled samples Y U , given the location of labelled samples Y S , where each location is encoded as a one-hot classification label, y i ∈ 1 c with c being the number of target regions.", "GCN GCN defines a neural network model f (X, A) with each layer: =D − 1 2 (A + λI)D − 1 2 H (l+1) = σ  H (l) W (l) + b , (1) whereD is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017) ; H 0 = X and the d in × d out matrix W (l) and d out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights inÂ, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u i , the output of layer l is computed by: h l+1 i = σ j∈nhood(i) ij h l j W l + b l , (2) 1 Code and data available at https://github.com/ afshinrahimi/geographconv Highway GCN: Highway GCN: , Output GCN: Figure 1 : The architecture of GCN geolocation model with layer-wise highway gates (W i h , b i h ).", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "X = BoWtext   A tanh tanh softmax H 0 H 1 H l−1 H l predict location:ŷ W l−1 , b l−1 , W l−1 h , b l−1 h W 1 , b 1 , W 1 h , b 1 h W l , b l where W l and b l are learnable layer parameters, and nhood(i) indicates the neighbours of user u i .", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015) , the output of a layer is summed with its input with gating weights T ( h l ): DCCA Given two views X and (from Equation 1) of data samples, CCA (Hotelling, 1936) , and its deep version (DCCA) (Andrew et al., 2013) learn functions f 1 (X) and f 2 (Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f 1 (X), f 2 (Â)) .", "(4) The resulting 
representations of f_1(X) and f_2(Â) are the compressed representations of the two views, where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f_1 and f_2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise tr(W_1^T Σ_12 W_2) subject to W_1^T Σ_11 W_1 = W_2^T Σ_22 W_2 = I (5), where Σ_11 and Σ_22 are the covariances of the two outputs, and Σ_12 is the cross-covariance.", "The weights W_1 and W_2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data - the outputs of the two networks - are then concatenated to form a multiview sample representation, as shown in Figure 2.", "3 Experiments Data [Figure 2: The DCCA model architecture: first, the text and network views X and Â are fed into two neural networks (left), which are trained unsupervisedly to maximise the correlation of their outputs; next, the outputs of the networks are concatenated and fed as input to another neural network (right), which is trained supervisedly to predict locations.] We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010), (2) TWITTER-US, and (3) TWITTER-WORLD, each partitioned into training, development and test", "sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix Â as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A_ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l_2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with sizes 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD, respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter, which controls the maximum number of users in each cluster, is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with
size 500 for the three datasets.", "The loss function is the CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, the accuracy of predicting a user within 161 km (100 miles) of the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features X, we use a one-hot encoding of a user's neighbours, which are then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks, which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single-layer multilayer perceptron, where the input to the network is the concatenation of the text view X (the user content's bag of words) and Â (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and network-based views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013).", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3.", "As shown, Deep CCA seems to slightly improve the representations over pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, large amounts of labelled data are often required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data; however, they differ in how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of samples that are labelled (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development sets of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text
and network views, and the development set, remain fixed for all the experiments.", "As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters and therefore a lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "At 1% labelled data for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters; just a few labelled samples are insufficient to train the parameters of GCN.", "[Figure 3 panels: (a) MLP-TXT+NET, (b) DCCA, (c) 1-layer GCN (Â · X), (d) 2-layer GCN (Â · Â · X).] Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise and consequently decrease accuracy, as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but adding more layers then changes the performance very little, as the gates allow the layer inputs to pass through the network without much change.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6.", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017).", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al.", "(2017b), indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017), outperforming it on the larger datasets, and underperforming on GEOTEXT. [Table 1: Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN, and network-based GCN-LP.]", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for the minimal supervision scenario where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.", "Note that Do et al.", "(2017) Rahimi et al.", "(2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than that of GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than in GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of the data; however, the distribution of errors in the development
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, this doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example, users in LA are more likely to talk about LA-related issues such as their sports teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sports teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation, fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like the Midwest U.S., and higher performance in densely populated areas such as NYC and LA, as shown in Figure 7.", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US, using GCN with only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surprisingly OH.", "In particular, users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX, where users from neighbouring smaller states are misclassified as being there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S.
states is provided in the supplementary material.", "Local Terms In Table 2, local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded, to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in the case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models, which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010); (2) geographical topic models, which learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013); and (3) supervised models, which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a).", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real-world scenario.", "[Table 2 caption: We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.]", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014), supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015), and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014).", "Node embedding methods build heterogeneous graphs between user-user, user-location and location-location, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017).", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocate 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information, such as toponyms or locations predicted from a text-based model, as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a); (2) ensembling separately trained text- and network-based models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017); and (3) jointly learning
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all the connections to hold location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
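The propagation rule of Equations 1 and 2 in the record above reduces to two matrix operations: a symmetric normalisation of A + λI and a per-layer affine map followed by a nonlinearity. The following is a minimal numpy sketch of that rule; the function names, the tanh activation, and the toy graph are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of the GCN propagation rule (Equations 1-2), numpy only.
import numpy as np

def normalise_adjacency(A, lam=1.0):
    """A_hat = D^{-1/2} (A + lam*I) D^{-1/2}, where D is the degree
    matrix of A + lam*I (Kipf & Welling, 2017)."""
    A_tilde = A + lam * np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))   # degree^{-1/2} per node
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_hat, H, W, b, sigma=np.tanh):
    """One layer: H^{l+1} = sigma(A_hat @ H^l @ W^l + b^l).
    Row i of A_hat @ H is a degree-weighted average of user i and its
    immediate neighbours, i.e. Equation 2 applied per user."""
    return sigma(A_hat @ H @ W + b)

# toy usage: 4 users on a chain, 6-dim BoW, 3 hidden units
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = rng.random((4, 6))
A_hat = normalise_adjacency(A)
H1 = gcn_layer(A_hat, X, rng.normal(size=(6, 3)), np.zeros(3))
print(H1.shape)   # (4, 3): one smoothed representation per user
```

Stacking `gcn_layer` calls widens the smoothing neighbourhood by one hop per layer, which is the behaviour the record's highway gates are introduced to regulate.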
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-12
Highway GCN Control Neighbourhood Smoothing
layer gates: T(h^l) = σ(W_h^l h^l + b_h^l); the output is a weighted sum of the layer input and output
layer gates: T(h^l) = σ(W_h^l h^l + b_h^l); the output is a weighted sum of the layer input and output
[]
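The slide content above gives the gate T(h^l) = σ(W_h^l h^l + b_h^l). Below is a minimal sketch of one highway-gated GCN layer under that definition; it assumes a tanh GCN activation and equal input/output widths so the gated sum is well defined, and all names are illustrative.

```python
# Sketch of a layer-wise highway gate over a GCN layer (Equation 3).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_gcn_layer(A_hat, H, W, b, W_h, b_h):
    """Mix the layer's transformation with its input, per unit."""
    H_new = np.tanh(A_hat @ H @ W + b)     # ordinary GCN transformation
    T = sigmoid(H @ W_h + b_h)             # transform gate in [0, 1]
    return H_new * T + H * (1.0 - T)       # carry part of the input through
```

When T saturates near 0, the layer input passes through almost unchanged, which is why the paper reports little change beyond three gated layers.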
GEM-SciDuet-train-118#paper-1321#slide-13
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the stateof-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
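The abstract above mentions two proposed baselines; the simpler one, MLP-TXT+NET, is described later in this record as a single-hidden-layer MLP over the concatenated text view X and network view Â, with a ReLU hidden layer. A sketch of such a forward pass follows; the softmax output over regions and all names are assumptions consistent with that description, not the released implementation.

```python
# Sketch of the MLP-TXT+NET baseline forward pass.
import numpy as np

def mlp_txt_net_forward(X, A_hat, W1, b1, W2, b2):
    Z = np.concatenate([X, A_hat], axis=1)     # multiview input, one row per user
    H = np.maximum(0.0, Z @ W1 + b1)           # ReLU hidden layer
    S = H @ W2 + b2
    E = np.exp(S - S.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)    # per-user distribution over regions
```

Training would minimise cross-entropy on the labelled rows only; because Â also encodes unlabelled users, the model is transductive.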
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017)).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013).", "Multiview Geolocation Let X ∈ R^{|U|×|V|} be the text view, consisting of the bag of words for each user in U using vocabulary V, and A ∈ 1^{|U|×|U|} be the network view, encoding user-user interactions.", "We partition U = U_S ∪ U_H into a supervised and heldout (unlabelled) set, U_S and U_H, respectively.", "The goal is to infer the location of unlabelled samples Y_U, given the location of labelled samples Y_S, where each location is encoded as a one-hot classification label, y_i ∈ 1^c, with c being the number of target regions.", "GCN GCN defines a neural network model f(X, A) with each layer: Â = D^{-1/2}(A + λI)D^{-1/2}, H^{(l+1)} = σ(Â H^{(l)} W^{(l)} + b) (1), where D is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017); H^{(0)} = X; the d_in × d_out matrix W^{(l)} and the d_out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights in Â, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u_i, the output of layer l is computed by: h_i^{l+1} = σ(Σ_{j∈nhood(i)} Â_{ij} h_j^l W^l + b^l) (2). [Footnote 1: Code and data available at https://github.com/afshinrahimi/geographconv] Figure 1: The architecture of the GCN geolocation model with layer-wise highway gates (W_h^i, b_h^i).", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "[Figure 1 diagram: stacked highway GCN layers H^0, H^1, ..., H^{l−1}, H^l over X = BoW_text and Â, with tanh activations, layer parameters (W^l, b^l) and gate parameters (W_h^l, b_h^l), ending in a softmax that predicts location ŷ.] Here W^l and b^l are learnable layer parameters, and nhood(i) indicates the neighbours of user u_i.", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015), the output of a layer is summed with its input with gating weights T(h^l): T(h^l) = σ(W_h^l h^l + b_h^l), h^{l+1} = h^{l+1} ∘ T(h^l) + h^l ∘ (1 − T(h^l)) (3). DCCA Given two views X and Â (from Equation 1) of data samples, CCA (Hotelling, 1936), and its deep version (DCCA) (Andrew et al., 2013), learn functions f_1(X) and f_2(Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f_1(X), f_2(Â)) (4).", "The resulting
representations of f_1(X) and f_2(Â) are the compressed representations of the two views, where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f_1 and f_2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise tr(W_1^T Σ_12 W_2) subject to W_1^T Σ_11 W_1 = W_2^T Σ_22 W_2 = I (5), where Σ_11 and Σ_22 are the covariances of the two outputs, and Σ_12 is the cross-covariance.", "The weights W_1 and W_2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data - the outputs of the two networks - are then concatenated to form a multiview sample representation, as shown in Figure 2.", "3 Experiments Data [Figure 2: The DCCA model architecture: first, the text and network views X and Â are fed into two neural networks (left), which are trained unsupervisedly to maximise the correlation of their outputs; next, the outputs of the networks are concatenated and fed as input to another neural network (right), which is trained supervisedly to predict locations.] We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010), (2) TWITTER-US, and (3) TWITTER-WORLD, each partitioned into training, development and test", "sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix Â as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A_ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l_2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with sizes 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD, respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter, which controls the maximum number of users in each cluster, is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with
size 500 for the three datasets.", "The loss function is CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, accuracy of predicting a user within 161km or 100 miles from the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features , we use one-hot encoding of a user's neighbours, which are then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single layer multilayer perceptron where the input to the network is the concatenation of the text view X, the user content's bag-of-words and (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and networkbased views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013) .", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3 .", "As shown, Deep CCA seems to slightly improve the representations from pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, often large amounts of labelled data are required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data, however, they are different in terms of how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario, only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of samples that are labelled (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development set of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text 
and network views, and the development set, remain fixed for all the experiments.", "As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters and therefore a lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "At 1% labelled data for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters; just a few labelled samples are insufficient to train the parameters of GCN.", "[Figure 3 panels: (a) MLP-TXT+NET, (b) DCCA, (c) 1-layer GCN (Â · X), (d) 2-layer GCN (Â · Â · X).] Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise and consequently decrease accuracy, as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but adding more layers then changes the performance very little, as the gates allow the layer inputs to pass through the network without much change.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6.", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017).", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al.", "(2017b), indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017), outperforming it on the larger datasets, and underperforming on GEOTEXT. [Table 1: Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN, and network-based GCN-LP.]", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for the minimal supervision scenario where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.", "Note that Do et al.", "(2017) Rahimi et al.", "(2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than that of GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than in GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of the data; however, the distribution of errors in the development
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, it doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like Midwest U.S., and higher performance in densely populated areas such as NYC and LA as shown in Figure 7 .", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US using GCN using only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surpris-ingly OH.", "Particularly users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. 
states is provided in the supplementary material.", "Local Terms In Table 2 , local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010) ; (2) geographical topic models that learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013) ; and (3) supervised models which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a) .", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014) , supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015) , and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014) .", "Node embedding methods build heterogeneous graphs between user-user, user-location and locationlocation, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017) .", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocation 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a textbased model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a) ; (2) ensembling separately trained text-and networkbased models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017) ; and (3) jointly learning 
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all the connections to hold location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
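The "Constructing the Views" passage in the record above defines the network view from a collapsed @-mention graph (an edge when one user mentions the other, or both mention the same account) and the text view as a binary-TF TF-IDF BoW with l2-normalised rows. Below is a sketch under those definitions; `mentions` and both function names are hypothetical, and scikit-learn stands in for whatever tooling the authors actually used.

```python
# Sketch of building the network view A and the text view X.
import numpy as np
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer

def build_network_view(users, mentions):
    """users: list of screen names; mentions: user -> set of @-mentioned names."""
    idx = {u: i for i, u in enumerate(users)}
    A = np.zeros((len(users), len(users)))
    for u in users:
        for m in mentions[u]:
            if m in idx:                        # direct mention between two users
                A[idx[u], idx[m]] = A[idx[m], idx[u]] = 1
    by_target = {}                              # co-mention of a shared account
    for u in users:
        for m in mentions[u]:
            by_target.setdefault(m, []).append(idx[u])
    for group in by_target.values():
        for i, j in combinations(group, 2):
            A[i, j] = A[j, i] = 1
    return A

def build_text_view(timelines):
    # binary term frequency + idf + l2 sample normalisation, as in the record
    vec = TfidfVectorizer(binary=True, use_idf=True, norm="l2")
    return vec.fit_transform(timelines)
```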
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-13
Neighbourhood Smoothing
Example users: Karin, Mark, Steven, Tim, Trevor. Normalised adjacency matrix: Â; text BoW: X. Smoothing over the immediate neighbourhood: Â · X; smoothing over the expanded neighbourhood: Â · Â · X
Example users: Karin, Mark, Steven, Tim, Trevor. Normalised adjacency matrix: Â; text BoW: X. Smoothing over the immediate neighbourhood: Â · X; smoothing over the expanded neighbourhood: Â · Â · X
[]
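The slide above contrasts one application of Â (immediate neighbourhood) with two (expanded neighbourhood) over the five example users. A toy numpy illustration follows; the chain of mentions connecting Karin through Trevor is an assumed graph, chosen only to make the 1-hop versus 2-hop spread visible.

```python
# Toy illustration of neighbourhood smoothing: A_hat @ X vs A_hat @ A_hat @ X.
import numpy as np

users = ["Karin", "Mark", "Steven", "Tim", "Trevor"]
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:   # assumed chain of mentions
    A[i, j] = A[j, i] = 1

A_tilde = A + np.eye(5)                          # add self-loops (lambda = 1)
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

X = np.eye(5)                    # one indicative term per user, for clarity
print(np.round(A_hat @ X, 2))          # each row mixes a user with 1-hop friends
print(np.round(A_hat @ A_hat @ X, 2))  # term mass now also reaches 2-hop friends
```

With X as the identity, each row of the smoothed matrix shows exactly how far a user's vocabulary has propagated, which is the mechanism behind the paper's "local terms" found only in unlabelled data.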
GEM-SciDuet-train-118#paper-1321#slide-14
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the stateof-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017)).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013).", "Multiview Geolocation Let X ∈ R^{|U|×|V|} be the text view, consisting of the bag of words for each user in U using vocabulary V, and A ∈ 1^{|U|×|U|} be the network view, encoding user-user interactions.", "We partition U = U_S ∪ U_H into a supervised and heldout (unlabelled) set, U_S and U_H, respectively.", "The goal is to infer the location of unlabelled samples Y_U, given the location of labelled samples Y_S, where each location is encoded as a one-hot classification label, y_i ∈ 1^c, with c being the number of target regions.", "GCN GCN defines a neural network model f(X, A) with each layer: Â = D^{-1/2}(A + λI)D^{-1/2}, H^{(l+1)} = σ(Â H^{(l)} W^{(l)} + b) (1), where D is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017); H^{(0)} = X; the d_in × d_out matrix W^{(l)} and the d_out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights in Â, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u_i, the output of layer l is computed by: h_i^{l+1} = σ(Σ_{j∈nhood(i)} Â_{ij} h_j^l W^l + b^l) (2). [Footnote 1: Code and data available at https://github.com/afshinrahimi/geographconv] Figure 1: The architecture of the GCN geolocation model with layer-wise highway gates (W_h^i, b_h^i).", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "[Figure 1 diagram: stacked highway GCN layers H^0, H^1, ..., H^{l−1}, H^l over X = BoW_text and Â, with tanh activations, layer parameters (W^l, b^l) and gate parameters (W_h^l, b_h^l), ending in a softmax that predicts location ŷ.] Here W^l and b^l are learnable layer parameters, and nhood(i) indicates the neighbours of user u_i.", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015), the output of a layer is summed with its input with gating weights T(h^l): T(h^l) = σ(W_h^l h^l + b_h^l), h^{l+1} = h^{l+1} ∘ T(h^l) + h^l ∘ (1 − T(h^l)) (3). DCCA Given two views X and Â (from Equation 1) of data samples, CCA (Hotelling, 1936), and its deep version (DCCA) (Andrew et al., 2013), learn functions f_1(X) and f_2(Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f_1(X), f_2(Â)) (4).", "The resulting
representations of f_1(X) and f_2(Â) are the compressed representations of the two views, where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f_1 and f_2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise tr(W_1^T Σ_12 W_2) subject to W_1^T Σ_11 W_1 = W_2^T Σ_22 W_2 = I (5), where Σ_11 and Σ_22 are the covariances of the two outputs, and Σ_12 is the cross-covariance.", "The weights W_1 and W_2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data - the outputs of the two networks - are then concatenated to form a multiview sample representation, as shown in Figure 2.", "3 Experiments Data [Figure 2: The DCCA model architecture: first, the text and network views X and Â are fed into two neural networks (left), which are trained unsupervisedly to maximise the correlation of their outputs; next, the outputs of the networks are concatenated and fed as input to another neural network (right), which is trained supervisedly to predict locations.] We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010), (2) TWITTER-US, and (3) TWITTER-WORLD, each partitioned into training, development and test", "sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix Â as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A_ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l_2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with sizes 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD, respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter, which controls the maximum number of users in each cluster, is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with
"The loss function is the CCA loss, which maximises the correlation between the two outputs.", "The supervised multilayer perceptron has one hidden layer, with sizes 300, 600 and 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, the accuracy of predicting a user within 161 km (100 miles) of the known location.", "3.4 Baselines We also compare DCCA and GCN with two baselines.", "GCN-LP is based on GCN, but as input, instead of the text-based features X, we use a one-hot encoding of a user's neighbours, which is then convolved with the k-hop neighbours using the GCN.", "This approach is similar to label propagation in that it smooths the label distribution of a user with that of its neighbours, but it uses graph convolutional networks, which have extra layer parameters and a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used as input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single-hidden-layer multilayer perceptron, where the input to the network is the concatenation of the text view X (the user content's bag of words) and Â (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, with sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "4 Results and Analysis 4.1 Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the text-based and network-based views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013).", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3.", "As shown, Deep CCA seems to slightly improve the representations over pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "4.2 Labelled Data Size To achieve good performance in supervised tasks, large amounts of labelled data are often required, which is a big challenge for Twitter geolocation, where only a small fraction of the data (about 1%) is geotagged.", "The scarcity of supervision indicates the importance of semi-supervised learning, where unlabelled (e.g. non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data; however, they differ in how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of labelled samples (as a percentage of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development sets of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text and network views, and the development set, remain fixed for all the experiments.",
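The evaluation protocol above (Median error, Mean error, and Acc@161) can be sketched as follows; the `haversine_km` helper and the (lat, lon) array layout are assumptions, not the paper's evaluation script.

```python
# Sketch of the three geolocation metrics; inputs are assumed to be
# arrays of shape (n, 2) holding (latitude, longitude) in degrees.
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def geolocation_metrics(true_coords, pred_coords):
    """Median error, Mean error, and Acc@161 over known vs. predicted points."""
    d = haversine_km(true_coords[:, 0], true_coords[:, 1],
                     pred_coords[:, 0], pred_coords[:, 1])
    return {"median_km": float(np.median(d)),
            "mean_km": float(np.mean(d)),
            "acc@161": float(np.mean(d < 161.0))}
```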
"As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters and therefore a lower supervision requirement to optimise them.", "When enough training data is available (e.g. more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between the network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN to MLP-TXT+NET.", "At 1% supervision on GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters; the few labelled samples are insufficient to train the parameters of GCN.", "[Figure 3 panels: (a) MLP-TXT+NET, (b) DCCA, (c) GCN Â·X, (d) GCN Â·Â·X]", "4.3 Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise and consequently decrease accuracy, as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added; adding further layers does not change the performance much, as the gates allow the layer inputs to pass through the network largely unchanged.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6.", "4.4 Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017).", "MLP-TXT+NET and GCN outperform all the text-only or network-only models, and also the hybrid model of Rahimi et al. (2017b), indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al. (2017), outperforming it on the larger datasets and underperforming on GEOTEXT.", "[Table 1: Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN, and network-based GCN-LP. The models are compared with text-only and network-only methods. The performance of the three joint models is also reported for the minimal supervision scenario, where only 1% of the total samples are labelled. \"-\" signifies that no results were reported for the given metric or dataset. Note that Do et al. (2017) [...].]", "GCN-LP outperforms Rahimi et al. (2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "4.5 Error Analysis Although the performance of MLP-TXT+NET is better than that of GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of the data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples and is much larger than in GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of the data; however, the distribution of errors in the development set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), simply because the number of labelled samples in those states is insufficient.",
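The highway-gated layer discussed above can be sketched in PyTorch as below. Since the paper's exact gating equation is not reproduced in this text, the sketch follows the generic highway formulation of Srivastava et al. (2015), y = f(x) * T(x) + x * (1 - T(x)), applied to the GCN layer of Equation 2 with a tanh nonlinearity; `HighwayGCNLayer` and its assumption of equal input and output sizes are illustrative, not the authors' implementation.

```python
# Sketch of one highway-gated GCN layer; requires dim_in == dim_out so the
# carried input can be mixed with the transformed output.
import torch
import torch.nn as nn

class HighwayGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)  # W^l, b^l of Equation 2
        self.gate = nn.Linear(dim, dim)       # gate parameters W_h^l, b_h^l

    def forward(self, a_hat, h):
        """a_hat: (n, n) dense normalised adjacency; h: (n, dim) features."""
        out = torch.tanh(self.transform(a_hat @ h))  # neighbourhood smoothing
        t = torch.sigmoid(self.gate(h))              # transform gate T(h^l)
        return out * t + h * (1.0 - t)               # gated carry of the input
```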
"Although we evaluate geolocation models with Median, Mean, and Acc@161, this does not mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussion, making the geolocation of users in those areas simpler.", "For example, users in LA are more likely to talk about LA-related issues, such as their sport teams, Hollywood or local events, than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation, fall in different clusters.", "The non-uniformity in local discussion results in lower geolocation performance in less densely populated areas, like the Midwest U.S., and higher performance in densely populated areas, such as NYC and LA, as shown in Figure 7.", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on the known state and predicted state for the development users of TWITTER-US, using GCN with only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and, surprisingly, OH.", "In particular, users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX, where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. states is provided in the supplementary material.",
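The state-level confusion analysis above amounts to counting (known state, predicted state) pairs over development users; a small sketch follows, assuming `true_states` and `pred_states` are hypothetical lists of U.S. state codes.

```python
# Sketch of the state-level confusion matrix used in the error analysis.
from collections import Counter

def state_confusion(true_states, pred_states):
    """Return a Counter mapping (true_state, pred_state) -> user count."""
    return Counter(zip(true_states, pred_states))

# e.g. where users whose known state is TX are wrongly predicted to be:
# conf = state_confusion(true_states, pred_states)
# tx_errors = {p: c for (t, p), c in conf.items() if t == "TX" and p != "TX"}
```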
"4.6 Local Terms In Table 2, local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded, to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in the case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "[Table 2 caption: terms that were present only in unlabelled data; they include city names, hashtags, food names and internet abbreviations.]", "5 Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models, which map geographical references in text to locations but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010); (2) geographical topic models, which learn region-specific topics but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013); and (3) supervised models, which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a).", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real-world scenario.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014), supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015), and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014).", "Node embedding methods build heterogeneous graphs between user-user, user-location and location-location, and learn an embedding space to minimise the distance of connected nodes and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017).", "Network-based models fail to geolocate disconnected users: Jurgens et al. (2015) could not geolocate 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information, such as toponyms or locations predicted from a text-based model, as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a); (2) ensembling separately trained text- and network-based models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017); and (3) jointly learning geolocation from several information sources, such as text and network information (Miura et al., 2017; Do et al., 2017), which can capture the complementary information in text and network views and also model the interactions between the two.",
"None of the previous multiview approaches, with the exception of Li et al. (2012a) and Li et al. (2012b), which only use toponyms, effectively uses unlabelled data in the text view; they use only the unlabelled information of the network view, via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of a few recent works (Miura et al., 2017; Do et al., 2017), previous models don't jointly exploit both text and network information, and therefore the interaction between the text and network views is not modelled; (2) the unlabelled data in both the text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real-world conditions.", "6 Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models, as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario, similar to real-world applications, by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all connections to exhibit location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g. using user-user gates)." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multiview Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-14
Sample Representation using t-SNE
FeatConcat [X, Â]  DCCA  GCN Â·X  GCN Â·Â·X
FeatConcat [X, Â]  DCCA  GCN Â·X  GCN Â·Â·X
[]
GEM-SciDuet-train-118#paper-1321#slide-15
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, which uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
GEM-SciDuet-train-118#paper-1321#slide-15
Test Results: Median Error
Median Error in km
Median Error in km
[]
GEM-SciDuet-train-118#paper-1321#slide-16
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, which uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "1 Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017) ).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013) .", "Multivew Geolocation Let X ∈ R |U |×|V | be the text view, consisting of the bag of words for each user in U using vocabulary V , and A ∈ 1 |U |×|U | be the network view, encoding user-user interactions.", "We partition U = U S ∪ U H into a supervised and heldout (unlabelled) set, U S and U H , respectively.", "The goal is to infer the location of unlabelled samples Y U , given the location of labelled samples Y S , where each location is encoded as a one-hot classification label, y i ∈ 1 c with c being the number of target regions.", "GCN GCN defines a neural network model f (X, A) with each layer: =D − 1 2 (A + λI)D − 1 2 H (l+1) = σ  H (l) W (l) + b , (1) whereD is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017) ; H 0 = X and the d in × d out matrix W (l) and d out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights inÂ, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u i , the output of layer l is computed by: h l+1 i = σ j∈nhood(i) ij h l j W l + b l , (2) 1 Code and data available at https://github.com/ afshinrahimi/geographconv Highway GCN: Highway GCN: , Output GCN: Figure 1 : The architecture of GCN geolocation model with layer-wise highway gates (W i h , b i h ).", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "X = BoWtext   A tanh tanh softmax H 0 H 1 H l−1 H l predict location:ŷ W l−1 , b l−1 , W l−1 h , b l−1 h W 1 , b 1 , W 1 h , b 1 h W l , b l where W l and b l are learnable layer parameters, and nhood(i) indicates the neighbours of user u i .", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015) , the output of a layer is summed with its input with gating weights T ( h l ): DCCA Given two views X and (from Equation 1) of data samples, CCA (Hotelling, 1936) , and its deep version (DCCA) (Andrew et al., 2013) learn functions f 1 (X) and f 2 (Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f 1 (X), f 2 (Â)) .", "(4) The resulting 
representations of f 1 (X) and f 2 (Â) are the compressed representations of the two views where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f 1 and f 2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise: tr(W T 1 Σ 12 W 2 ) subject to: W T 1 Σ 11 W 1 = W T 2 Σ 22 W 2 = I (5) where Σ 11 and Σ 22 are the covariances of the two outputs, and Σ 12 is the cross-covariance.", "The weights W 1 and W 2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data -the outputs of the two networks -are then concatenated to form a multiview sample representation, as shown in Figure 2 .", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010) , Figure 2 : The DCCA model architecture: First the two text and network views X and are fed into two neural networks (left), which are unsupervisedly trained to maximise the correlation of their outputs; next the outputs of the networks are concatenated, and fed as input to another neural network (right), which is trained supervisedly to predict locations.", "sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.", "(2012) , with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l 2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with size 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter which controls the maximum number of users in each cluster is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with 
size 500 for the three datasets.", "The loss function is CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, accuracy of predicting a user within 161km or 100 miles from the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features , we use one-hot encoding of a user's neighbours, which are then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single layer multilayer perceptron where the input to the network is the concatenation of the text view X, the user content's bag-of-words and (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and networkbased views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013) .", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3 .", "As shown, Deep CCA seems to slightly improve the representations from pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, often large amounts of labelled data are required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data, however, they are different in terms of how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario, only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of samples that are labelled (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development set of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text 
and network view, and the development set, remain fixed for all the experiments.", "As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters and, therefore, a lower supervision requirement to optimise them.", "When enough training data is available (e.g. more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between the network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "At 1% labelled data for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters and just a few labelled samples, insufficient to train the parameters of GCN.", "[Figure 3 panels: (a) MLP-TXT+NET; (b) DCCA; (c) one GCN layer, Â·X; (d) two GCN layers, Â·Â·X.]", "Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise and consequently decrease accuracy, as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added; adding further layers changes performance little, as the gates allow the layer inputs to pass through the network largely unchanged.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6.", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017).", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al. (2017b), indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al. (2017), outperforming it on the larger datasets, and underperforming on GEOTEXT.", "[Table 1 caption: Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN, and network-based GCN-LP.", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for the minimal supervision scenario, where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.]", "Note that Do et al. (2017)", "Rahimi et al. (2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than in GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of data; however, the distribution of errors in the development
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, these aggregate metrics do not imply that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example, users in LA are more likely to talk about LA-related issues such as their sports teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sports teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other and, therefore, as a result of discretisation, fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like the Midwest U.S., and higher performance in densely populated areas such as NYC and LA, as shown in Figure 7.", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US, using GCN with only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surprisingly OH.", "In particular, users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX, where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S.
states is provided in the supplementary material.", "Local Terms Table 2 shows local terms of a few regions detected by GCN under minimal supervision.", "The terms that were present in the labelled data are excluded, to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in the case of Seattle, #goseahawks is an important term that is not present in the 1% labelled data but is present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models, which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010); (2) geographical topic models, which learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013); and (3) supervised models, which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a).", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real-world scenario.", "[Table 2 caption: We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.]", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014), supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015), and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014).", "Node embedding methods build heterogeneous graphs between user-user, user-location and location-location, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017).", "Network-based models fail to geolocate disconnected users: Jurgens et al. (2015) couldn't geolocate 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a text-based model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a); (2) ensembling separately trained text- and network-based models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017); and (3) jointly learning
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017), which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches (with the exception of Li et al. (2012a) and Li et al. (2012b), which only use toponyms) effectively uses unlabelled data in the text view; they use only the unlabelled information of the network view, via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of a few recent works (Miura et al., 2017; Do et al., 2017), previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real-world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models, as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real-world applications, by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all connections to reflect location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g. using user-user gates)." ] }
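The record above evaluates geolocation with Median error, Mean error, and Acc@161 (accuracy within 161 km of the known location). Below is a minimal Python sketch of those three metrics, assuming coordinates are (latitude, longitude) pairs in degrees; the haversine great-circle formula and all function names are our own illustrative choices, not taken from the paper's released code.

import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in km between points given in degrees.
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def geolocation_metrics(true_coords, pred_coords):
    # Median error, Mean error, and Acc@161: the fraction of users
    # predicted within 161 km (100 miles) of their known location.
    t, p = np.asarray(true_coords), np.asarray(pred_coords)
    d = haversine_km(t[:, 0], t[:, 1], p[:, 0], p[:, 1])
    return np.median(d), d.mean(), float((d <= 161).mean())

For a discretised model, pred_coords would plausibly be representative coordinates (e.g. the median) of each predicted k-d tree cluster, consistent with the setup the paper describes.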
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-16
Top Features Learnt from Unlabelled Data (1% Supervision)
Seattle, WA Austin, TX Jacksonville, FL Columbus, OH #goseahawks smock traffuck ferran promissory chowdown ckrib #meatsweats lanterna pupper effaced #austin lmfbo unf ribault wahoowa wjct fscj floridian Top terms for a few regions detected by GCN using only 1% of Twitter-US for supervision. The terms that existed in labelled data are removed.
Seattle, WA Austin, TX Jacksonville, FL Columbus, OH #goseahawks smock traffuck ferran promissory chowdown ckrib #meatsweats lanterna pupper effaced #austin lmfbo unf ribault wahoowa wjct fscj floridian Top terms for a few regions detected by GCN using only 1% of Twitter-US for supervision. The terms that existed in labelled data are removed.
[]
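The paper content in the record above describes Highway GCN, but its exact gating equation (Equation 3) is lost in this extraction; the text only says the layer output is summed with its input under gating weights T(h^l). The sketch below therefore follows the standard highway formulation of Srivastava et al. (2015), which matches that description; treat the precise form as our assumption rather than the paper's verbatim equation. Note the gate requires d_in == d_out, which holds here since each dataset uses a fixed hidden size across layers.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_gcn_layer(A_hat, H, W, b, W_t, b_t):
    # Candidate output of an ordinary GCN layer.
    candidate = np.tanh(A_hat @ H @ W + b)
    # Transform gate T decides, per dimension, how much of the convolved
    # signal vs. the raw input to keep; the carry gate is 1 - T.
    T = sigmoid(H @ W_t + b_t)
    return candidate * T + H * (1.0 - T)

With gates near zero, the input passes through almost unchanged, which is consistent with the record's observation that adding layers beyond three changes performance little.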
GEM-SciDuet-train-118#paper-1321#slide-17
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, which uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017)).", "We also introduce two multiview baselines: MLP-TXT+NET, based on concatenation of text and network, and DCCA, based on Deep Canonical Correlation Analysis (Andrew et al., 2013).", "Multiview Geolocation Let X ∈ R^{|U|×|V|} be the text view, consisting of the bag of words for each user in U using vocabulary V, and A ∈ 1^{|U|×|U|} be the network view, encoding user-user interactions.", "We partition U = U_S ∪ U_H into a supervised and a heldout (unlabelled) set, U_S and U_H, respectively.", "The goal is to infer the location of unlabelled samples Y_U, given the location of labelled samples Y_S, where each location is encoded as a one-hot classification label, y_i ∈ 1^c, with c being the number of target regions.", "GCN GCN defines a neural network model f(X, A) with each layer: Â = D̃^{-1/2}(A + λI)D̃^{-1/2}, H^{(l+1)} = σ(Â H^{(l)} W^{(l)} + b) (1), where D̃ is the degree matrix of A + λI; the hyperparameter λ controls the weight of a node against its neighbourhood, and is set to 1 in the original model (Kipf and Welling, 2017); H^{(0)} = X; the d_in × d_out matrix W^{(l)} and the d_out × 1 vector b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using the weights in Â, and performs a linear transformation using W and b, followed by a nonlinear activation function (σ).", "In other words, for user u_i, the output of layer l is computed by: h_i^{(l+1)} = σ(Σ_{j ∈ nhood(i)} Â_{ij} h_j^{(l)} W^{(l)} + b^{(l)}) (2), where W^{(l)} and b^{(l)} are learnable layer parameters, and nhood(i) indicates the neighbours of user u_i.", "[Footnote 1: Code and data available at https://github.com/afshinrahimi/geographconv]", "[Figure 1: The architecture of the GCN geolocation model with layer-wise highway gates (W_h^i, b_h^i).", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location; the diagram shows the inputs X = BoW(text) and Â feeding a stack of tanh layers H^{(0)}, H^{(1)}, ..., H^{(l)} with parameters (W^{(i)}, b^{(i)}, W_h^{(i)}, b_h^{(i)}), followed by a softmax that predicts the location ŷ.]", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to the propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015), the output of a layer is summed with its input, with gating weights T(h^{(l)}).", "DCCA Given two views X and Â (from Equation 1) of data samples, CCA (Hotelling, 1936), and its deep version (DCCA) (Andrew et al., 2013), learn functions f_1(X) and f_2(Â) such that the correlation between the outputs of the two functions is maximised: ρ = corr(f_1(X), f_2(Â)) (4).", "The resulting
representations of f_1(X) and f_2(Â) are the compressed representations of the two views, where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view; their concatenation is a multiview representation of the data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f_1 and f_2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise tr(W_1^T Σ_12 W_2), subject to W_1^T Σ_11 W_1 = W_2^T Σ_22 W_2 = I (5), where Σ_11 and Σ_22 are the covariances of the two outputs, and Σ_12 is the cross-covariance.", "The weights W_1 and W_2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data (the outputs of the two networks) are then concatenated to form a multiview sample representation, as shown in Figure 2.", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010),", "[Figure 2: The DCCA model architecture: first, the two text and network views X and Â are fed into two neural networks (left), which are trained in an unsupervised fashion to maximise the correlation of their outputs; next, the outputs of the networks are concatenated and fed as input to another neural network (right), which is trained in a supervised fashion to predict locations.]", "sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree, following Roller et al. (2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build the matrix Â as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A_ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN, with sizes 300, 600, and 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD, respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter, which controls the maximum number of users in each cluster, is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with
size 500 for the three datasets.", "The loss function is the CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with sizes 300, 600, and 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, the accuracy of predicting a user within 161 km (100 miles) of the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features, we use a one-hot encoding of a user's neighbours, which is then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks, which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single-layer multilayer perceptron, where the input to the network is the concatenation of the text view X (the user content's bag-of-words) and Â (Equation 1), which represents the network view, as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and network-based views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013).", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3.", "As shown, Deep CCA seems to slightly improve the representations over pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, large amounts of labelled data are often required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning, where unlabelled (e.g. non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data; however, they differ in how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of labelled samples (as a % of dataset samples), while using the remainder as unlabelled data, and analysed their Median error performance over the development sets of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text
and network view, and the development set, remain fixed for all the experiments.", "As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters and, therefore, a lower supervision requirement to optimise them.", "When enough training data is available (e.g. more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between the network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "At 1% labelled data for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters and just a few labelled samples, insufficient to train the parameters of GCN.", "[Figure 3 panels: (a) MLP-TXT+NET; (b) DCCA; (c) one GCN layer, Â·X; (d) two GCN layers, Â·Â·X.]", "Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise and consequently decrease accuracy, as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added; adding further layers changes performance little, as the gates allow the layer inputs to pass through the network largely unchanged.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6.", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017).", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al. (2017b), indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al. (2017), outperforming it on the larger datasets, and underperforming on GEOTEXT.", "[Table 1 caption: Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN, and network-based GCN-LP.", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for the minimal supervision scenario, where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.]", "Note that Do et al. (2017)", "Rahimi et al. (2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than in GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of data; however, the distribution of errors in the development
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, these aggregate metrics do not imply that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example, users in LA are more likely to talk about LA-related issues such as their sports teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sports teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other and, therefore, as a result of discretisation, fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like the Midwest U.S., and higher performance in densely populated areas such as NYC and LA, as shown in Figure 7.", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US, using GCN with only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surprisingly OH.", "In particular, users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX, where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S.
states is provided in the supplementary material.", "Local Terms Table 2 shows local terms of a few regions detected by GCN under minimal supervision.", "The terms that were present in the labelled data are excluded, to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in the case of Seattle, #goseahawks is an important term that is not present in the 1% labelled data but is present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models, which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010); (2) geographical topic models, which learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013); and (3) supervised models, which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a).", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real-world scenario.", "[Table 2 caption: We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.]", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014), supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015), and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014).", "Node embedding methods build heterogeneous graphs between user-user, user-location and location-location, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017).", "Network-based models fail to geolocate disconnected users: Jurgens et al. (2015) couldn't geolocate 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a text-based model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a); (2) ensembling separately trained text- and network-based models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017); and (3) jointly learning
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017), which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches (with the exception of Li et al. (2012a) and Li et al. (2012b), which only use toponyms) effectively uses unlabelled data in the text view; they use only the unlabelled information of the network view, via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of a few recent works (Miura et al., 2017; Do et al., 2017), previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real-world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models, as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real-world applications, by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all connections to reflect location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g. using user-user gates)." ] }
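Equation 5 in this record is the classical CCA objective that DCCA maximises through the two view-specific MLPs. The sketch below only evaluates that objective: the sum of canonical correlations, obtained from the SVD of the whitened cross-covariance T = Σ_11^{-1/2} Σ_12 Σ_22^{-1/2}. The ridge term r and all names are our additions (for numerical stability and illustration); backpropagating through this quantity to train the MLPs is not shown.

import numpy as np

def cca_correlation(H1, H2, r=1e-4):
    # H1, H2: (n_samples x d) outputs of the two view networks.
    n = H1.shape[0]
    H1c, H2c = H1 - H1.mean(axis=0), H2 - H2.mean(axis=0)
    S11 = H1c.T @ H1c / (n - 1) + r * np.eye(H1.shape[1])
    S22 = H2c.T @ H2c / (n - 1) + r * np.eye(H2.shape[1])
    S12 = H1c.T @ H2c / (n - 1)

    def inv_sqrt(S):
        # Symmetric inverse square root via eigendecomposition.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    # The singular values of T are the canonical correlations.
    return np.linalg.svd(T, compute_uv=False).sum()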
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-17
Dev Results: How much labelled data do we really have?
labelled data (%samples) labelled data (%samples) median error (km) Median Error in km Joint DCCA 1% Joint FeatConcat 1% Joint GCN 1% labelled data (%samples) GeoText TwitterUS TwitterWorld Twitter-World Test results with 1% labelled data
labelled data (%samples) labelled data (%samples) median error (km) Median Error in km Joint DCCA 1% Joint FeatConcat 1% Joint GCN 1% labelled data (%samples) GeoText TwitterUS TwitterWorld Twitter-World Test results with 1% labelled data
[]
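The records above label users with k-d tree cells: training coordinates are recursively split until each leaf holds at most bucket_size users (50 for GEOTEXT, 2400 for the larger datasets), and the leaf index becomes the class label. A minimal recursive sketch of that discretisation, under our own naming and a simple alternating median split (the paper follows Roller et al. (2012), whose exact splitting details may differ):

import numpy as np

def kd_tree_leaves(coords, bucket_size=50, depth=0):
    # coords: (n x 2) array of (lat, lon) training points.
    coords = np.asarray(coords)
    if len(coords) <= bucket_size:
        return [coords]                    # one leaf = one class label
    dim = depth % 2                        # alternate lat / lon splits
    median = np.median(coords[:, dim])
    left = coords[coords[:, dim] <= median]
    right = coords[coords[:, dim] > median]
    if len(left) == 0 or len(right) == 0:  # degenerate split, stop
        return [coords]
    return (kd_tree_leaves(left, bucket_size, depth + 1)
            + kd_tree_leaves(right, bucket_size, depth + 1))

# leaves = kd_tree_leaves(train_coords, bucket_size=50)
# A training user's label is the index of its leaf; at prediction time,
# a label maps back to, e.g., the median coordinates of its leaf.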
GEM-SciDuet-train-118#paper-1321#slide-18
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, which uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017)).", "We also introduce two multiview baselines: MLP-TXT+NET, based on concatenation of text and network, and DCCA, based on Deep Canonical Correlation Analysis (Andrew et al., 2013).", "Multiview Geolocation Let X ∈ R^{|U|×|V|} be the text view, consisting of the bag of words for each user in U using vocabulary V, and A ∈ 1^{|U|×|U|} be the network view, encoding user-user interactions.", "We partition U = U_S ∪ U_H into a supervised and a heldout (unlabelled) set, U_S and U_H, respectively.", "The goal is to infer the location of unlabelled samples Y_U, given the location of labelled samples Y_S, where each location is encoded as a one-hot classification label, y_i ∈ 1^c, with c being the number of target regions.", "GCN GCN defines a neural network model f(X, A) with each layer: Â = D̃^{-1/2}(A + λI)D̃^{-1/2}, H^{(l+1)} = σ(Â H^{(l)} W^{(l)} + b) (1), where D̃ is the degree matrix of A + λI; the hyperparameter λ controls the weight of a node against its neighbourhood, and is set to 1 in the original model (Kipf and Welling, 2017); H^{(0)} = X; the d_in × d_out matrix W^{(l)} and the d_out × 1 vector b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using the weights in Â, and performs a linear transformation using W and b, followed by a nonlinear activation function (σ).", "In other words, for user u_i, the output of layer l is computed by: h_i^{(l+1)} = σ(Σ_{j ∈ nhood(i)} Â_{ij} h_j^{(l)} W^{(l)} + b^{(l)}) (2), where W^{(l)} and b^{(l)} are learnable layer parameters, and nhood(i) indicates the neighbours of user u_i.", "[Footnote 1: Code and data available at https://github.com/afshinrahimi/geographconv]", "[Figure 1: The architecture of the GCN geolocation model with layer-wise highway gates (W_h^i, b_h^i).", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location; the diagram shows the inputs X = BoW(text) and Â feeding a stack of tanh layers H^{(0)}, H^{(1)}, ..., H^{(l)} with parameters (W^{(i)}, b^{(i)}, W_h^{(i)}, b_h^{(i)}), followed by a softmax that predicts the location ŷ.]", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to the propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015), the output of a layer is summed with its input, with gating weights T(h^{(l)}).", "DCCA Given two views X and Â (from Equation 1) of data samples, CCA (Hotelling, 1936), and its deep version (DCCA) (Andrew et al., 2013), learn functions f_1(X) and f_2(Â) such that the correlation between the outputs of the two functions is maximised: ρ = corr(f_1(X), f_2(Â)) (4).", "The resulting
representations of f_1(X) and f_2(Â) are the compressed representations of the two views, where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view; their concatenation is a multiview representation of the data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f_1 and f_2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise tr(W_1^T Σ_12 W_2), subject to W_1^T Σ_11 W_1 = W_2^T Σ_22 W_2 = I (5), where Σ_11 and Σ_22 are the covariances of the two outputs, and Σ_12 is the cross-covariance.", "The weights W_1 and W_2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data (the outputs of the two networks) are then concatenated to form a multiview sample representation, as shown in Figure 2.", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010),", "[Figure 2: The DCCA model architecture: first, the two text and network views X and Â are fed into two neural networks (left), which are trained in an unsupervised fashion to maximise the correlation of their outputs; next, the outputs of the networks are concatenated and fed as input to another neural network (right), which is trained in a supervised fashion to predict locations.]", "sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree, following Roller et al. (2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build the matrix Â as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A_ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN, with sizes 300, 600, and 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD, respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter, which controls the maximum number of users in each cluster, is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with
size 500 for the three datasets.", "The loss function is the CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with sizes 300, 600, and 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, the accuracy of predicting a user within 161 km (100 miles) of the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features, we use a one-hot encoding of a user's neighbours, which is then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks, which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single-layer multilayer perceptron, where the input to the network is the concatenation of the text view X (the user content's bag-of-words) and Â (Equation 1), which represents the network view, as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and network-based views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013).", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3.", "As shown, Deep CCA seems to slightly improve the representations over pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, large amounts of labelled data are often required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning, where unlabelled (e.g. non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data; however, they differ in how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of labelled samples (as a % of dataset samples), while using the remainder as unlabelled data, and analysed their Median error performance over the development sets of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text
and network view, and the development set, remain fixed for all the experiments.", "As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters and, therefore, a lower supervision requirement to optimise them.", "When enough training data is available (e.g. more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between the network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "At 1% labelled data for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters and just a few labelled samples, insufficient to train the parameters of GCN.", "[Figure 3 panels: (a) MLP-TXT+NET; (b) DCCA; (c) one GCN layer, Â·X; (d) two GCN layers, Â·Â·X.]", "Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise and consequently decrease accuracy, as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added; adding further layers changes performance little, as the gates allow the layer inputs to pass through the network largely unchanged.", "The performance peaks at 4 layers, which is compatible with the distribution of shortest path lengths shown in Figure 6.", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017).", "MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al. (2017b), indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al. (2017), outperforming it on the larger datasets, and underperforming on GEOTEXT.", "[Table 1 caption: Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN, and network-based GCN-LP.", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for the minimal supervision scenario, where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.]", "Note that Do et al. (2017)", "Rahimi et al. (2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than in GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of data; however, the distribution of errors in the development
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, it doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like Midwest U.S., and higher performance in densely populated areas such as NYC and LA as shown in Figure 7 .", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US using GCN using only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surpris-ingly OH.", "Particularly users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. 
states is provided in the supplementary material.", "Local Terms In Table 2 , local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010) ; (2) geographical topic models that learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013) ; and (3) supervised models which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a) .", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014) , supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015) , and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014) .", "Node embedding methods build heterogeneous graphs between user-user, user-location and locationlocation, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017) .", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocation 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a textbased model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a) ; (2) ensembling separately trained text-and networkbased models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017) ; and (3) jointly learning 
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all the connections to hold location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
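The evaluation described in these results (Median error, Mean error, and Acc@161, the accuracy of placing a user within 161 km of the known location) is easy to reproduce. A minimal sketch, not the authors' code: the haversine distance and the 161 km threshold follow the text, while the earth-radius constant, function names, and example coordinates are our own assumptions.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean earth radius; our assumption, not from the paper

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def geolocation_metrics(pred, gold):
    """Median error, Mean error, and Acc@161 over paired (lat, lon) lists."""
    errs = sorted(haversine_km(p[0], p[1], g[0], g[1]) for p, g in zip(pred, gold))
    n = len(errs)
    median = errs[n // 2] if n % 2 else 0.5 * (errs[n // 2 - 1] + errs[n // 2])
    return {"median_km": median,
            "mean_km": sum(errs) / n,
            "acc@161": sum(e <= 161.0 for e in errs) / n}

# Toy example: one prediction a few km off (counts for Acc@161), one several hundred km off.
print(geolocation_metrics(pred=[(40.75, -74.00), (34.05, -118.24)],
                          gold=[(40.71, -74.00), (37.77, -122.42)]))
```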
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-18
Confusion Matrix Between True Location and Predicted Location
[Figure: confusion matrix between true and predicted U.S. states; both axes list state abbreviations, x-axis labelled "Predicted".] Users from smaller states are misclassified in nearby larger states such as TX, NY, CA, and OH. Users from FL are misclassified in several other states, possibly because they are not born in FL, and are well connected to their hometowns in other states.
[Figure: confusion matrix between true and predicted U.S. states; both axes list state abbreviations, x-axis labelled "Predicted".] Users from smaller states are misclassified in nearby larger states such as TX, NY, CA, and OH. Users from FL are misclassified in several other states, possibly because they are not born in FL, and are well connected to their hometowns in other states.
[]
GEM-SciDuet-train-118#paper-1321#slide-19
1321
Semi-supervised User Geolocation via Graph Convolutional Networks
Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the stateof-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135 ], "paper_content_text": [ "Introduction User geolocation, the task of identifying the \"home\" location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010) , which use social media as an implicit information resource about people.", "Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users.", "Third-party service providers don't have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011) , or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013) .", "The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data.", "Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user-user interactions (Davis Jr et al., 2011; Jurgens, 2013) .", "Both text and network views are critical in geolocating users.", "Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation.", "Other users have many local social interactions, and mostly use social media to read other people's comments, and for interacting with friends.", "Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present.", "There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017) , or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017) .", "Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario.", "In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location.", "Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outper-forms 
state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.", "Model We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (\"GCN\": Kipf and Welling (2017)).", "We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013).", "Multiview Geolocation Let X ∈ R^{|U|×|V|} be the text view, consisting of the bag of words for each user in U using vocabulary V, and A ∈ 1^{|U|×|U|} be the network view, encoding user-user interactions.", "We partition U = U_S ∪ U_H into a supervised and heldout (unlabelled) set, U_S and U_H, respectively.", "The goal is to infer the location of unlabelled samples Y_U, given the location of labelled samples Y_S, where each location is encoded as a one-hot classification label, y_i ∈ 1^c with c being the number of target regions.", "GCN GCN defines a neural network model f(X, A) with each layer: Â = D̃^{−1/2} (A + λI) D̃^{−1/2}, H^{(l+1)} = σ(Â H^{(l)} W^{(l)} + b) (1), where D̃ is the degree matrix of A + λI; hyperparameter λ controls the weight of a node against its neighbourhood, which is set to 1 in the original model (Kipf and Welling, 2017); H^{(0)} = X and the d_in × d_out matrix W^{(l)} and d_out × 1 matrix b are trainable layer parameters; and σ is an arbitrary nonlinearity.", "The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights in Â, and performs a linear transformation using W and b followed by a nonlinear activation function (σ).", "In other words, for user u_i, the output of layer l is computed by: h^{l+1}_i = σ(Σ_{j ∈ nhood(i)} Â_{ij} h^l_j W^l + b^l) (2), where W^l and b^l are learnable layer parameters, and nhood(i) indicates the neighbours of user u_i.", "Code and data available at https://github.com/afshinrahimi/geographconv", "[Figure 1: The architecture of the GCN geolocation model with layer-wise highway gates (W^i_h, b^i_h): the BoW text view X is passed through stacked highway GCN layers H^0, H^1, ..., H^{l−1} with tanh activations and a final softmax output GCN layer H^l that predicts the location ŷ.]", "GCN is applied to a BoW model of user content over the @-mention graph to predict user location.", "Each extra layer in GCN extends the neighbourhood over which a sample is smoothed.", "For example a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.", "Highway GCN Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members.", "To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks.", "In highway networks (Srivastava et al., 2015), the output of a layer is summed with its input with gating weights T(h^l): h^{l+1} ← h^{l+1} ∘ T(h^l) + h^l ∘ (1 − T(h^l)), with T(h^l) = σ(h^l W^l_T + b^l_T) (3), where ∘ is elementwise multiplication.", "DCCA Given two views X and Â (from Equation 1) of data samples, CCA (Hotelling, 1936), and its deep version (DCCA) (Andrew et al., 2013) learn functions f_1(X) and f_2(Â) such that the correlation between the output of the two functions is maximised: ρ = corr(f_1(X), f_2(Â)) (4).",
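Equations (1)-(3) are compact enough to sketch directly. Below is a minimal numpy illustration, not the authors' implementation (their code is at the GitHub link above): the adjacency normalisation of Eq. (1), a tanh graph-convolutional layer, and a layer-wise highway gate. The text only states that a layer's output is summed with its input under gates T(h^l), so the gate parameterisation T(h) = sigmoid(h W_T + b_T) used here is the standard highway-network form and should be read as an assumption.

```python
import numpy as np

def normalise_adjacency(A, lam=1.0):
    """Â = D̃^{-1/2} (A + λI) D̃^{-1/2} as in Eq. (1)."""
    A_hat = A + lam * np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_hat, H, W, b):
    """One graph-convolutional layer: σ(Â H W + b), with σ = tanh as in Figure 1."""
    return np.tanh(A_hat @ H @ W + b)

def highway_gcn_layer(A_hat, H, W, b, W_t, b_t):
    """Highway-gated layer: mix the layer output with its input via gates T(H).
    Gating requires equal input/output widths, hence the matching toy sizes below."""
    H_new = gcn_layer(A_hat, H, W, b)
    T = 1.0 / (1.0 + np.exp(-(H @ W_t + b_t)))  # sigmoid gate (assumed form)
    return T * H_new + (1.0 - T) * H

rng = np.random.default_rng(0)
n_users, width = 5, 8
A = rng.integers(0, 2, size=(n_users, n_users))
A = np.triu(A, 1); A = A + A.T                  # symmetric @-mention graph, no self-loops
A_hat = normalise_adjacency(A)
X = rng.random((n_users, width))                # toy BoW text view
W, b = rng.normal(size=(width, width)), np.zeros(width)
W_t, b_t = rng.normal(size=(width, width)), np.zeros(width)
H1 = highway_gcn_layer(A_hat, X, W, b, W_t, b_t)
print(H1.shape)  # (5, 8): one hop of gated neighbourhood smoothing
```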
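Equations (4)-(5) can likewise be made concrete with plain linear CCA on the outputs of the two view networks. A sketch under the standard identity that the canonical correlations are the singular values of Σ11^{-1/2} Σ12 Σ22^{-1/2}; the eps regulariser and all names are ours. In DCCA, the negative of this correlation sum is the loss backpropagated through the two MLPs.

```python
import numpy as np

def cca_correlations(H1, H2, k, eps=1e-8):
    """Sum of the top-k canonical correlations between views H1, H2 (n x d arrays).
    These are the singular values of Σ11^{-1/2} Σ12 Σ22^{-1/2} (Eq. 5);
    eps-regularisation of the covariances is our addition for numerical stability."""
    n = H1.shape[0]
    H1 = H1 - H1.mean(axis=0)
    H2 = H2 - H2.mean(axis=0)
    S11 = H1.T @ H1 / (n - 1) + eps * np.eye(H1.shape[1])
    S22 = H2.T @ H2 / (n - 1) + eps * np.eye(H2.shape[1])
    S12 = H1.T @ H2 / (n - 1)
    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T
    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    sv = np.linalg.svd(T, compute_uv=False)
    return sv[:k].sum()  # DCCA maximises this quantity

# Two noisy views of the same 3-dimensional latent signal.
rng = np.random.default_rng(1)
Z = rng.normal(size=(200, 3))
H1 = Z @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(200, 10))
H2 = Z @ rng.normal(size=(3, 12)) + 0.1 * rng.normal(size=(200, 12))
print(round(cca_correlations(H1, H2, k=3), 3))  # close to 3.0: three shared dimensions
```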
"The resulting representations of f_1(X) and f_2(Â) are the compressed representations of the two views where the uncorrelated noise between them is reduced.", "The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of data, which can be used as input for other tasks.", "In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the f_1 and f_2 functions of Equation 4), the output of which is used to estimate the CCA cost: maximise tr(W_1^T Σ_12 W_2) subject to W_1^T Σ_11 W_1 = W_2^T Σ_22 W_2 = I (5), where Σ_11 and Σ_22 are the covariances of the two outputs, and Σ_12 is the cross-covariance.", "The weights W_1 and W_2 are the linear projections of the MLP outputs, which are used in estimating the CCA cost.", "The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections.", "After training, the two networks are used to predict new projections for unseen data.", "The two projections of unseen data - the outputs of the two networks - are then concatenated to form a multiview sample representation, as shown in Figure 2.", "[Figure 2: The DCCA model architecture: first the two text and network views X and Â are fed into two neural networks (left), which are unsupervisedly trained to maximise the correlation of their outputs; next the outputs of the networks are concatenated, and fed as input to another neural network (right), which is trained supervisedly to predict locations.]", "3 Experiments Data We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010), (2) TWITTER-US (Roller et al., 2012), and (3) TWITTER-WORLD (Han et al., 2012), partitioned into training, development and test sets.", "Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD.", "GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively.", "The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al. (2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Constructing the Views We build matrix Â as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (A_ij = 1) if one mentions the other, or they co-mention another user.", "The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l2 normalisation of samples.", "Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node.", "We use 3 layers in GCN with size 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTER-WORLD respectively.", "Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4.", "The k-d tree bucket size hyperparameter, which controls the maximum number of users in each cluster, is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set.", "The architecture of GCN-LP is similar, with the difference that the text view is set to zero.", "In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with
size 500 for the three datasets.", "The loss function is CCA loss, which maximises the output correlations.", "The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets.", "We evaluate the models using Median error, Mean error, and Acc@161, accuracy of predicting a user within 161km or 100 miles from the known location.", "Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features , we use one-hot encoding of a user's neighbours, which are then convolved with their k-hop neighbours using the GCN.", "This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius.", "Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2.", "MLP-TXT+NET is a simple transductive supervised model based on a single layer multilayer perceptron where the input to the network is the concatenation of the text view X, the user content's bag-of-words and (Equation 1), which represents the network view as a vector input.", "For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively.", "Results and Analysis Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways.", "Deep CCA takes the two text-based and networkbased views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013) .", "The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together.", "GCN, on the other hand, uses graph convolution.", "The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3 .", "As shown, Deep CCA seems to slightly improve the representations from pure concatenation of the two views.", "GCN, on the other hand, substantially improves the representations.", "Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.", "Labelled Data Size To achieve good performance in supervised tasks, often large amounts of labelled data are required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%).", "The scarcity of supervision indicates the importance of semi-supervised learning where unlabelled (e.g.", "non-geotagged) tweets are used for training.", "The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data, however, they are different in terms of how much labelled data they require to achieve acceptable performance.", "Given that in a real-world scenario, only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of labelled samples on the performance of the three geolocation models.", "We provided the three models with different fractions of samples that are labelled (in terms of % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development set of GEOTEXT, TWITTER-US, and TWITTER-WORLD.", "Note that the text 
and network view, and the development set, remain fixed for all the experiments.", "As shown in Figure 4 , when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters, and therefore, lower supervision requirement to optimise them.", "When enough training data is available (e.g.", "more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the interactions between network and text views.", "When all the training samples of the two larger datasets (95% and 98% for TWITTER-US and TWITTER-WORLD, respectively) are available to the models, MLP-TXT+NET outperforms GCN.", "Note that the number of parameters increases from DCCA to GCN and to MLP-TXT+NET.", "In 1% for GEOTEXT, DCCA outperforms GCN as a result of having fewer parameters and just a few labelled samples, insufficient to train the parameters of GCN.", "(a) MLP-TXT+NET (b) DCCA (c) 1 GCN · X (d) 2 GCN ·Â · X Highway Gates Adding more layers to GCN expands the graph neighbourhood within which the user features are averaged, and so might introduce noise, and consequently decrease accuracy as shown in Figure 5 when no gates are used.", "We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but then by adding more layers the performance doesn't change that much as gates are allowing the layer inputs to pass through the network without much change.", "The performance peaks at 4 layers which is compatible with the distribution of shortest path lengths shown in Figure 6 .", "Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1.", "The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b) , a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017) .", "MLP-TXT+NET and GCN outperform all the text-or network-only models, and also the hybrid model of Rahimi et al.", "(2017b) , indicating that joint modelling of text and network features is important.", "MLP-TXT+NET is competitive with Do et al.", "(2017) , outperforming it on larger datasets, and underperforming on GEO- Table 1 : Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN and network-based GCN-LP.", "The models are compared with text-only and network-only methods.", "The performance of the three joint models is also reported for minimal supervision scenario where only 1% of the total samples are labelled.", "\"-\" signifies that no results were reported for the given metric or dataset.", "Note that Do et al.", "(2017) Rahimi et al.", "(2015a) , which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.", "Error Analysis Although the performance of MLP-TXT+NET is better than GCN and DCCA when a large amount of labelled data is available (Table 1) , under a scenario where little labelled data is available (1% of data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples, and is much larger than GCN and DCCA.", "GCN outperforms DCCA and MLP-TXT+NET using 1% of data, however, the distribution of errors in the development 
set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), which is simply because the number of labelled samples in those states is insufficient.", "Although we evaluate geolocation models with Median, Mean, and Acc@161, it doesn't mean that the distribution of errors is uniform over all locations.", "Big cities often attract more local online discussions, making the geolocation of users in those areas simpler.", "For example users in LA are more likely to talk about LA-related issues such as their sport teams, Hollywood or local events than users in the state of Rhode Island (RI), which lacks large sport teams or major events.", "It is also possible that people in less densely populated areas are further apart from each other, and therefore, as a result of discretisation fall in different clusters.", "The non-uniformity in local discussions results in lower geolocation performance in less densely populated areas like Midwest U.S., and higher performance in densely populated areas such as NYC and LA as shown in Figure 7 .", "The geographical distribution of error for GCN, DCCA and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material.", "To get a better picture of misclassification between states, we built a confusion matrix based on known state and predicted state for development users of TWITTER-US using GCN using only 1% of labelled data.", "There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and surpris-ingly OH.", "Particularly users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY.", "The same goes for OH and TX where users from neighbouring smaller states are misclassified to be there.", "Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections that exist between NYC and LA/SF.", "Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL.", "The full confusion matrix between the U.S. 
states is provided in the supplementary material.", "Local Terms In Table 2 , local terms of a few regions detected by GCN under minimal supervision are shown.", "The terms that were present in the labelled data are excluded to show how graph convolutions over the social graph have extended the vocabulary.", "For example, in case of Seattle, #goseahawks is an important term not present in the 1% labelled data but present in the unlabelled data.", "The convolution over the social graph is able to utilise such terms that don't exist in the labelled data.", "Related Work Previous work on user geolocation can be broadly divided into text-based, network-based and multiview approaches.", "Text-based geolocation uses the geographical bias in language use to infer the location of users.", "There are three main text-based approaches to geolocation: (1) gazetteer-based models which map geographical references in text to location, but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010) ; (2) geographical topic models that learn region-specific topics, but don't scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013) ; and (3) supervised models which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a) .", "Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real world scenario.", "We present the terms that were present only in unlabelled data.", "The terms include city names, hashtags, food names and internet abbreviations.", "Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other.", "There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.", "Distance-based methods model the probability of friendship given the distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014) , supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015) , and graph-based label-propagation models propagate the location information through the user-user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014) .", "Node embedding methods build heterogeneous graphs between user-user, user-location and locationlocation, and learn an embedding space to minimise the distance of connected nodes, and maximise the distance of disconnected nodes.", "The embeddings are then used in supervised models for geolocation (Wang et al., 2017) .", "Network-based models fail to geolocate disconnected users: Jurgens et al.", "(2015) couldn't geolocation 37% of users as a result of disconnectedness.", "Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information such as toponyms or locations predicted from a textbased model as auxiliary nodes into the user-user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a) ; (2) ensembling separately trained text-and networkbased models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017) ; and (3) jointly learning 
geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017) , which can capture the complementary information in text and network views, and also model the interactions between the two.", "None of the previous multiview approaches -with the exception of Li et al.", "(2012a) and Li et al.", "(2012b) that only use toponyms -effectively uses unlabelled data in the text view, and use only the unlabelled information of the network view via the user-user graph.", "There are three main shortcomings in the previous work on user geolocation that we address in this paper: (1) with the exception of few recent works (Miura et al., 2017; Do et al., 2017) , previous models don't jointly exploit both text and network information, and therefore the interaction between text and network views is not modelled; (2) the unlabelled data in both text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real world conditions.", "Conclusion We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting.", "We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information.", "We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario similar to real world applications by effectively using unlabelled data.", "We ignored the context in which users interact with each other, and assumed all the connections to hold location homophily.", "In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g.", "using user-user gates)." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "5", "6" ], "paper_header_content": [ "Introduction", "Model", "Multivew Geolocation", "GCN", "Highway GCN", "DCCA", "Data", "Constructing the Views", "Model Selection", "Baselines", "Representation", "Labelled Data Size", "Highway Gates", "Performance", "Error Analysis", "Local Terms", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-118#paper-1321#slide-19
Conclusion
Simple concatenation in FeatConcat is a strong baseline with large amounts of labelled data. GCN performs well with both large and small amounts of labelled data by effectively using unlabelled data. Gating mechanisms (e.g. highway gates) are essential for controlling neighbourhood smoothing in GCN with multiple layers. The models proposed here are applicable to other demographic inference tasks.
Simple concatenation in FeatConcat is a strong baseline with large amounts of labelled data. GCN performs well with both large and small amounts of labelled data by effectively using unlabelled data. Gating mechanisms (e.g. highway gates) are essential for controlling neighbourhood smoothing in GCN with multiple layers. The models proposed here are applicable to other demographic inference tasks.
[]
GEM-SciDuet-train-119#paper-1323#slide-0
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177 ], "paper_content_text": [ "Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.", "When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.", "Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.", "Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .", "For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.", "Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.", "We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.", "In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.", "However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?", "Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.", "In this paper, we address such logical expressions of links on LDA-DF framework.", "Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.", "At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.", "This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.", "LDA with Dirichlet Forest Priors We briefly review LDA-DF.", "Let w := w 1 .", ".", ".", "w n be a corpus consisting of D documents, where n is the total number of words in the documents.", "Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.", 
"Let T be the number of topics.", "As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.", "The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.", "The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.", "The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.", "The trees assigned to topics z are denoted as q.", "In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.", "1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.", "This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).", "In the case of CLs, we use the following algorithm.", "For examples, the algorithm creates the two trees in Fig.", "1 (b) for the constraint CL(A, B) ∧ CL(A, C).", "The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.", "Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. 
"Sampling of tree q_z is achieved by sequentially sampling subtree q^{(r)}_z corresponding to the r-th connected component by using the following equation: p(q^{(r)}_z = q′ | z, q_{−z}, q^{(−r)}_z, w) ∝ |M_{r,q′}| × Π_{s ∈ I^{(q′)}_{z,r}} [ Γ(Σ_{k ∈ C_z(s)} γ^{(k)}_z) Π_{k ∈ C_z(s)} Γ(γ^{(k)}_z + n^{(k)}_z) ] / [ Γ(Σ_{k ∈ C_z(s)} (γ^{(k)}_z + n^{(k)}_z)) Π_{k ∈ C_z(s)} Γ(γ^{(k)}_z) ], where I^{(q′)}_{z,r} represents the set of internal nodes in the subtree q′ corresponding to the r-th connected component for tree q_z.", "|M_{r,q′}| represents the size of the maximal independent set corresponding to the subtree q′ for the r-th connected component.", "After sufficiently sampling z_i and q_z, we can infer posterior probabilities ϕ̂ and θ̂ using the last sampled z and q, in a similar manner to the standard LDA, as follows: θ̂^{(d)}_z = (n^{(d)}_z + α) / Σ^T_{z′=1} (n^{(d)}_{z′} + α), ϕ̂^{(w)}_z = Π_{s ∈ I_z(↑w)} [ (γ^{(C_z(s↓w))}_z + n^{(C_z(s↓w))}_z) / Σ_{k ∈ C_z(s)} (γ^{(k)}_z + n^{(k)}_z) ].", "Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).", "We denote it as (∧,∨,¬)-expressions.", "Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.", "Interpretation of negations is discussed in Sec. 3.4.", "(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.", "The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.", "One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig. 1(a), and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig. 1(b).", "The right tree of Fig. 1(b) is created by Np(B) ∧ Np(C).", "Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B), CL(A, B) = Np(A) ∨ Np(B).", "Using this substitution, we can compile a (∧,∨)-expression of links to the corresponding Dirichlet trees with the following algorithm (see the sketch below): 1. Substitute all links (ML and CL) with the corresponding primitives (Ep and Np). 2. Calculate the minimum DNF of the primitives. 3. Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.", "Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec. 1.", "We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C).", "In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)), and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.",
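The three-step compilation above can be sketched with sets of monomials: a DNF is a set of monomials, each monomial a frozenset of primitives; OR is set union, AND distributes monomials pairwise, and absorption (dropping any monomial that contains another) yields the minimum DNF, since all primitives occur positively (the expressions are monotone). All names below are ours.

```python
from itertools import product

def Ep(a, b):
    return {frozenset([("Ep",) + tuple(sorted((a, b)))])}

def Np(a):
    return {frozenset([("Np", a)])}

def OR(*exprs):
    out = set()
    for e in exprs:
        out |= e
    return absorb(out)

def AND(*exprs):
    out = {frozenset()}
    for e in exprs:
        out = {m1 | m2 for m1, m2 in product(out, e)}
    return absorb(out)

def absorb(monomials):
    """Drop any monomial that strictly contains another (absorption law);
    for monotone expressions this gives the minimum DNF."""
    return {m for m in monomials if not any(o < m for o in monomials)}

def ML(a, b): return Ep(a, b)
def CL(a, b): return OR(Np(a), Np(b))

# (ML(A,B) ∨ ML(A,C)) ∧ CL(B,C) from the worked example:
A, B, C = "kung-fu", "jackie", "bruce"
dnf = AND(OR(ML(A, B), ML(A, C)), CL(B, C))
for monomial in sorted(map(sorted, dnf)):
    print(monomial)   # four monomials -> four Dirichlet trees
```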
of an asymptotic behavior of Dirichlet trees.", "We define asymptotic behavior as Asymptotic Topic Family (ATF) as follows.", "Definition 1 (Asymptotic Topic Family). For any (∧,∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as a family f* calculated by the following rules: given (∧,∨)-expressions f_1 and f_2 of primitives and words A, B ∈ W, (i) (f_1 ∨ f_2)* := f_1* ∪ f_2*, (ii) (f_1 ∧ f_2)* := f_1* ∩ f_2*, (iii) Ep*(A, B) := {∅, {A, B}} ⊗ 2^{W−{A,B}}, and (iv) Np*(A) := 2^{W−{A}}.", "Here, notation ⊗ is defined as X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y} for given two sets X and Y.", "ATF expresses all combinations of words that can occur in a topic when η is large.", "In the above example, the ATF of its expression with respect to W = {A, B, C} is calculated as ((ML(A, B) ∨ ML(A, C)) ∧ CL(B, C))* = ((Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)))* = ({∅, {A, B}} ⊗ 2^{W−{A,B}} ∪ {∅, {A, C}} ⊗ 2^{W−{A,C}}) ∩ (2^{W−{B}} ∪ 2^{W−{C}}) = {∅, {B}, {C}, {A, B}, {A, C}}.", "As we expected, the ATF of the last equation indicates such a constraint that either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.", "Note that the part of {B} satisfies ML(A, C) ∧ CL(B, C).", "If you want to remove {B} and {C}, you can use exclusive disjunctions.", "For the sake of simplicity, we omit descriptions about W when its instance is arbitrary or obvious from now on.", "The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.", "Let MIS(G) be the set of maximal independent sets of graph G, and let L := {{w, w′} | w, w′ ∈ W, w ≠ w′}.", "Theorem 2. For any (∧)-expression of CLs characterized by ℓ ⊆ L, the ATF of its minimum DNF, ∪_{X ∈ X} (∩_{x ∈ X} Np*(x)), is equivalent to the union of the power sets of every maximal independent set S ∈ MIS(G) of a graph G := (W, ℓ), that is, ∪_{X ∈ X} (∩_{x ∈ X} Np*(x)) = ∪_{S ∈ MIS(G)} 2^S.", "Proof. For any (∧)-expressions of links characterized by ℓ ⊆ L, we denote f_ℓ and G_ℓ as the corresponding minimum DNF and graph, respectively.", "We define U_ℓ := ∪_{S ∈ MIS(G_ℓ)} 2^S.", "When |ℓ| = 1, f_ℓ* = U_ℓ is trivial.", "Assuming f_ℓ* = U_ℓ when |ℓ| > 1, for any set ℓ′ := ℓ ∪ {{A, B}} with an additional link characterized by {A, B} ∈ L, we obtain f_{ℓ′}* = ((Np(A) ∨ Np(B)) ∧ f_ℓ)* = (2^{W−{A}} ∪ 2^{W−{B}}) ∩ U_ℓ = ∪_{S ∈ MIS(G_ℓ)} ((2^{W−{A}} ∩ 2^S) ∪ (2^{W−{B}} ∩ 2^S)) = ∪_{S ∈ MIS(G_ℓ)} (2^{S−{A}} ∪ 2^{S−{B}}) = ∪_{S ∈ MIS(G_{ℓ′})} 2^S = U_{ℓ′}.", "This proves the theorem by induction.", "In the last line of the above deformation, we used ∪_{S ∈ MIS(G)} 2^S = ∪_{S ∈ IS(G)} 2^S and MIS(G_{ℓ′}) ⊆ ∪_{S ∈ MIS(G_ℓ)} {S − {A}, S − {B}} ⊆ IS(G_{ℓ′}), where IS(G) represents the set of all independent sets on graph G.",
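Definition 1 is directly executable on small vocabularies, which makes the worked example above mechanically checkable. A sketch representing a topic family as a set of frozensets; the helper names are ours.

```python
from itertools import chain, combinations

def powerset(ws):
    s = list(ws)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

def otimes(X, Y):
    """X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y}."""
    return {x | y for x in X for y in Y}

def Ep_star(a, b, W):
    return otimes({frozenset(), frozenset({a, b})}, powerset(W - {a, b}))

def Np_star(a, W):
    return powerset(W - {a})

W = {"A", "B", "C"}
ML_AB, ML_AC = Ep_star("A", "B", W), Ep_star("A", "C", W)
CL_BC = Np_star("B", W) | Np_star("C", W)   # rule (i): (f1 ∨ f2)* = f1* ∪ f2*
atf = (ML_AB | ML_AC) & CL_BC               # rule (ii): (f1 ∧ f2)* = f1* ∩ f2*
for t in sorted(atf, key=lambda s: (len(s), sorted(s))):
    print(sorted(t))
# [], ['B'], ['C'], ['A', 'B'], ['A', 'C']  -> {∅, {B}, {C}, {A,B}, {A,C}} as above
```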
"In the above theorem, ∪_{X ∈ X} (∩_{x ∈ X} Np*(x)) represents asymptotic behaviors of our method, while ∪_{S ∈ MIS(G)} 2^S represents those of the existing method.", "By using a similar argument to the proof, we can prove the elements of the two sets are completely the same, i.e., {∩_{x ∈ X} Np*(x) | X ∈ X} = {2^S | S ∈ MIS(G)}.", "This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.", "Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which means the performance improvement of Gibbs sampling for Dirichlet trees.", "This is achieved just by minimizing DNF on the asymptotic equivalence relation defined as follows.", "Definition 3 (Asymptotic Equivalence Relation). Given two (∧,∨)-expressions f_1, f_2, we say that f_1 is asymptotically equivalent to f_2, if and only if f_1* = f_2*.", "We denote the relation as notation ≍, that is, f_1 ≍ f_2 ⇔ f_1* = f_2*.", "The next proposition gives an intuitive understanding of why the asymptotic equivalence relation can shrink Dirichlet forests.", "Proposition 4. For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B), and (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B).", "Proof. We prove (a) only: since Np*(A) ∩ Np*(B) = 2^{W−{A,B}} ⊆ {∅, {A, B}} ⊗ 2^{W−{A,B}} = Ep*(A, B), the union of the two sides equals Ep*(A, B).", "We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.", "In the experiment, we prepare conjunctions of random links of MLs and CLs when |W| = 10, and compare the average numbers of Dirichlet trees compiled by minimum DNF (M-DNF) and asymptotic minimum DNF (AM-DNF) in 100 trials.", "The experimental result shown in Tab. 1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees especially when the number of links is large.", "Customizing New Links Two primitives Ep and Np allow us to easily customize new links without changing the algorithm.", "Let us consider Imply-Link(A, B) or IL(A, B), which is a constraint that B must appear if A appears in a topic (informally, A → B); IL(A, B) is effective when B has multiple meanings, as mentioned later in Sec. 4.", "In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) with respect to W = {A, B} is {∅, {A, B}, {B}}.",
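Proposition 4 and the ATF of IL just given can be checked with the same set construction as before; a small sketch, reusing the rules of Definition 1 (helper names are ours).

```python
from itertools import chain, combinations

def powerset(ws):
    s = list(ws)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

def Ep_star(a, b, W):
    rest = powerset(W - {a, b})
    return {x | y for x in [frozenset(), frozenset({a, b})] for y in rest}

def Np_star(a, W):
    return powerset(W - {a})

W = {"A", "B", "C"}
# Proposition 4(a): Ep(A,B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A,B)
lhs_a = Ep_star("A", "B", W) | (Np_star("A", W) & Np_star("B", W))
print(lhs_a == Ep_star("A", "B", W))                   # True: one tree suffices
# Proposition 4(b): Ep(A,B) ∧ Np(A) ≍ Np(A) ∧ Np(B)
lhs_b = Ep_star("A", "B", W) & Np_star("A", W)
print(lhs_b == (Np_star("A", W) & Np_star("B", W)))    # True
# ATF of IL(A,B) = Ep(A,B) ∨ Np(A) over W = {A, B}:
W2 = {"A", "B"}
il = Ep_star("A", "B", W2) | Np_star("A", W2)
print(sorted(sorted(t) for t in il))  # [[], ['A', 'B'], ['B']] -> {∅, {A,B}, {B}}
```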
"Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to be the same meaning of IL(A, B) ∧ IL(B, A).", "However, this anticipation is wrong on the normal equivalency, i.e., ML(A, B) ≠ IL(A, B) ∧ IL(B, A).", "The asymptotic equivalency can fulfill the anticipation with the next proposition.", "This simultaneously suggests that our definition is semantically valid.", "Proposition 5. For any two words A, B ∈ W, IL(A, B) ∧ IL(B, A) ≍ ML(A, B).", "Proof. IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) = ML(A, B), where the asymptotic steps follow from Proposition 4.", "Further, we can construct XIL(X_1, ..., X_n, Y) as an extended version of IL(A, B), which allows us to use multiple conditions like Horn clauses.", "This informally means ∧^n_{i=1} X_i → Y as an extension of A → B.", "In this case, we set XIL(X_1, ..., X_n, Y) = ∧^n_{i=1} Ep(X_i, Y) ∨ ∨^n_{i=1} Np(X_i).", "When we want to isolate unnecessary words (i.e., stop words), we can use Isolate-Link (ISL) defined as ISL(X_1, ..., X_n) = ∧^n_{i=1} Np(X_i).", "This is easier than considering CLs between high-frequency words and unnecessary words as described in (Andrzejewski et al., 2009).", "Negation of Links There are two types of interpretation for negation of links.", "One is strong negation, which regards ¬ML(A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".", "We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we just remove ¬ML(A, B) for weak negation.", "We consider the strong negation in this study.", "According to Def. 1, the ATF of the negation ¬f of primitive f seems to be defined as (¬f)* := 2^W − f*.", "However, this definition is not fit for strong negation, since ¬ML(A, B) would not be asymptotically equivalent to CL(A, B) under that definition.", "Thus we define it to be fit for strong negation as follows.", "Definition 6 (ATF of strong negation of links). Given a link L with arguments X_1, ..., X_n, letting f_L be the primitives of L, we define the ATF of the negation of L as (¬L(X_1, ..., X_n))* := (2^W − f_L*(X_1, ..., X_n)) ∪ 2^{W−{X_1,...,X_n}}.", "Note that the definition is used not for primitives but for links.", "Actually, the similar definition for primitives is not fit for strong negation, and so we must remove all negations in a preprocessing stage.", "The next proposition gives the way to remove the negation of each link treated in this study.", "We define the no-constraint condition as ϵ for the result of ISL.", "Proposition 7. For any words A, B, X_1, ..., X_n, Y ∈ W, (a) ¬ML(A, B) ≍ CL(A, B), (b) ¬CL(A, B) ≍ ML(A, B), (c) ¬IL(A, B) ≍ Np(B), (d) ¬XIL(X_1, ..., X_n, Y) ≍ ∧^{n−1}_{i=1} Ep(X_i, X_n) ∧ Np(Y), and (e) ¬ISL(X_1, ..., X_n) ≍ ϵ.", "Proof. We prove (a) only: (¬ML(A, B))* = (2^W − Ep*(A, B)) ∪ 2^{W−{A,B}} = (2^{{A,B}} − {∅, {A, B}}) ⊗ 2^{W−{A,B}} ∪ 2^{W−{A,B}} = {∅, {A}, {B}} ⊗ 2^{W−{A,B}} = 2^{W−{A}} ∪ 2^{W−{B}} = Np*(A) ∪ Np*(B) = (CL(A, B))*.", "Comparison on a Synthetic Corpus We experiment using a synthetic corpus {ABAB, ACAC} × 2 with vocabulary W = {A, B, C} to clarify the property of our method, in the same way as in the existing work (Andrzejewski et al., 2009).", "We set topic size as T = 2.", "The goal of this experiment is to obtain two topics: a topic where A and B frequently occur and a topic where A and C frequently occur.", "We abbreviate the grouping type as AB|AC.",
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.", "Thus, we naively classify a grouping type of each result into the four types.", "Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.", "Fig.", "2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.", "The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.", "Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).", "The result of (1) is the same result as LDA, because of no constraints.", "In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.", "As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.", "The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.", "Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.", "The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).", "The reason is that (5) allows A to appear with C, while (4) does not.", "In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.", "Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.", "Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.", "2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.", "Each topic label is determined by looking carefully at highfrequency words in the topic.", "To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' had other topics such as a topic about comedy film Big 
Lebowski.", "We finally added ML('star', 'jedi') ∨ ML ('star', ' trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics considering polysemy of 'star'.", "After the fourth run of LDA-DF, we appropriately obtained two topics about Star Wars and Star Trek as in the fourth block.", "Note that our solution is not ad-hoc, and we can easily apply it to similar problems.", "Conclusions We proposed a simple method to achieve topic models with logical constraints on words.", "Our method compiles a given constraint to the prior of LDA-DF, which is a recently developed semisupervised extension of LDA with Dirichlet forest priors.", "As well as covering the constraints in the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.", "We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.", "In the comparative Table 2 : Characteristic topics obtained in the experiment on the real corpus.", "Four blocks in the table corresponds to the results of the four constraints ϵ, ISL(· · · ), CL('jedi', 'trek') ∧ ISL(· · · ), and (ML('jedi', 'trek') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(· · · ), respectively.", "Topic High frequency words in each topic ?", "have give night film turn performance ?", "not life have own first only family tell ?", "movie have n't get good not see ?", "have black scene tom death die joe ?", "film have n't not make out well see Isolated have film movie not good make n't ?", "star war trek planet effect special Comedy comedy funny laugh school hilarious Disney disney voice mulan animated song Family life love family mother woman father Isolated have film movie not make good n't StarWars star war lucas effect jedi special ?", "science world trek fiction lebowski Comedy funny comedy laugh get hilarious Disney disney truman voice toy show Family family father mother boy child son Isolated have film movie not make good n't StarWars star war toy jedi menace phantom StarTrek alien effect star science special trek Comedy comedy funny laugh hilarious joke Disney disney voice animated mulan Family life love family man story child study on a synthetic corpus, we clarified the property of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.", "In the future, we intend to address detail comparative studies on real corpora and consider a simple method integrating negations into a whole, although we removed them in a preprocessing stage in this study." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "LDA with Dirichlet Forest Priors", "Logical Constraints on Words", "(∧,∨)-expressions of Links", "Shrinking Dirichlet Forests", "Customizing New Links", "Negation of Links", "Comparison on a Synthetic Corpus", "Interactive Topic Analysis", "Conclusions" ] }
GEM-SciDuet-train-119#paper-1323#slide-0
Topic modeling Word clustering
Method to extract latent topics on a corpus Each topic is a distribution on words yogurt rose LDA about milk oil Bulgaria food organic yogurt rose dance LDA about milk oil fire Bulgaria food organic sexy kazanlak walk exotic Size of each word represents its frequency
Method to extract latent topics on a corpus Each topic is a distribution on words yogurt rose LDA about milk oil Bulgaria food organic yogurt rose dance LDA about milk oil fire Bulgaria food organic sexy kazanlak walk exotic Size of each word represents its frequency
[]
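The slide above presents LDA as a method that extracts latent topics, each a distribution over words. For readers who want to try this, here is a minimal, hedged usage sketch with an off-the-shelf library (assuming gensim is installed; the toy documents below are invented for illustration):

```python
# Fit a 2-topic LDA on toy documents and print each topic's word distribution.
from gensim import corpora, models

texts = [["yogurt", "milk", "food"], ["rose", "oil", "organic"],
         ["yogurt", "milk", "organic"], ["rose", "oil", "food"]]
dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, passes=20)
for t in range(2):
    print(lda.show_topic(t))  # each topic: (word, probability) pairs
```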
GEM-SciDuet-train-119#paper-1323#slide-1
1323
Topic Models with Logical Constraints on Words
GEM-SciDuet-train-119#paper-1323#slide-1
Existing work Andrzejewski ICML2009
Constraints on words for topic modeling Must-Link(A,B) A and B appear in the same topic Cannot-Link(A,B) A and B don't appear in the same topic Want to split into fire dance dance dance CL Cannot-Link(fire, sexy) fire sexy
Constraints on words for topic modeling Must-Link(A,B) A and B appear in the same topic Cannot-Link(A,B) A and B don't appear in the same topic Want to split into fire dance dance dance CL Cannot-Link(fire, sexy) fire sexy
[]
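The Must-Link mechanism summarized on this slide is realized in the paper by a two-level Dirichlet tree: the root draws the joint mass of the linked pair from Dirichlet(2β, β), and an internal node splits that mass with Dirichlet(ηβ, ηβ), so a large η pushes the two word probabilities toward equality. A tiny numerical illustration (editor's sketch with numpy, not the paper's sampler):

```python
# One draw from the ML(A, B) Dirichlet tree: with eta*beta large, the
# internal split concentrates near (0.5, 0.5), so p(A) ~ p(B).
import numpy as np

rng = np.random.default_rng(0)
beta, eta = 0.01, 1000.0
mass_ab, mass_c = rng.dirichlet([2 * beta, beta])   # root: pair mass vs. rest
split = rng.dirichlet([eta * beta, eta * beta])     # internal node: near-even
p_a, p_b = mass_ab * split
print(p_a, p_b, mass_c)  # p_a and p_b are close; increasing eta tightens this
```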
GEM-SciDuet-train-119#paper-1323#slide-2
1323
Topic Models with Logical Constraints on Words
GEM-SciDuet-train-119#paper-1323#slide-2
Problem of the existing work
Constraints often don't align with users' intention You might get blaze topic instead of fire dance topic Want to split into fire dance blaze dance CL Cannot-Link(fire, sexy) fire sexy
Constraints often don't align with users' intention You might get blaze topic instead of fire dance topic Want to split into fire dance blaze dance CL Cannot-Link(fire, sexy) fire sexy
[]
GEM-SciDuet-train-119#paper-1323#slide-3
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177 ], "paper_content_text": [ "Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.", "When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.", "Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.", "Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .", "For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.", "Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.", "We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.", "In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.", "However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?", "Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.", "In this paper, we address such logical expressions of links on LDA-DF framework.", "Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.", "At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.", "This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.", "LDA with Dirichlet Forest Priors We briefly review LDA-DF.", "Let w := w 1 .", ".", ".", "w n be a corpus consisting of D documents, where n is the total number of words in the documents.", "Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.", 
"Let T be the number of topics.", "As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.", "The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.", "The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.", "The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.", "The trees assigned to topics z are denoted as q.", "In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.", "1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.", "This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).", "In the case of CLs, we use the following algorithm.", "For examples, the algorithm creates the two trees in Fig.", "1 (b) for the constraint CL(A, B) ∧ CL(A, C).", "The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.", "Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. 
n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .", "I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .", "C z (s) represents the set of children of node s in tree q z .", "γ (k) z represents a weight of the edge to node k in tree q z .", "Additionally, we define ∑ S s := ∑ s∈S .", "Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s   Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z )   , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .", "|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.", "After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.", "θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).", "We denote it as (∧,∨,¬)-expressions.", "Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.", "Interpretation of negations is discussed in Sec.", "3.4.", "(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.", "The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.", "One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.", "1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.", "1(b) .", "The right tree of Fig.", "1(b) is created by Np(B) ∧ Np(C).", "Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.", "1.", "Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).", "2.", "Calculate the minimum DNF of the primitives.", "3.", "Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.", "Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.", "1.", "We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .", "In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.", "Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms 
of an asymptotic behavior of Dirichlet trees.", "We define asymptotic behavior as the Asymptotic Topic Family (ATF) as follows.", "Definition 1 (Asymptotic Topic Family).", "For any (∧,∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as a family $f^*$ calculated by the following rules: given (∧,∨)-expressions $f_1$ and $f_2$ of primitives and words $A, B \in W$, (i) $(f_1 \vee f_2)^* := f_1^* \cup f_2^*$; (ii) $(f_1 \wedge f_2)^* := f_1^* \cap f_2^*$; (iii) $\mathrm{Ep}^*(A, B) := \{\emptyset, \{A, B\}\} \otimes 2^{W - \{A, B\}}$; (iv) $\mathrm{Np}^*(A) := 2^{W - \{A\}}$.", "Here, the notation ⊗ is defined as $X \otimes Y := \{x \cup y \mid x \in X, y \in Y\}$ for given two sets X and Y.", "ATF expresses all combinations of words that can occur in a topic when η is large.", "In the above example, the ATF of its expression with respect to W = {A, B, C} is calculated as $((\mathrm{ML}(A, B) \vee \mathrm{ML}(A, C)) \wedge \mathrm{CL}(B, C))^* = ((\mathrm{Ep}(A, B) \vee \mathrm{Ep}(A, C)) \wedge (\mathrm{Np}(B) \vee \mathrm{Np}(C)))^* = \left( \{\emptyset, \{A, B\}\} \otimes 2^{W-\{A,B\}} \cup \{\emptyset, \{A, C\}\} \otimes 2^{W-\{A,C\}} \right) \cap \left( 2^{W-\{B\}} \cup 2^{W-\{C\}} \right) = \{\emptyset, \{B\}, \{C\}, \{A, B\}, \{A, C\}\}$.", "As we expected, the ATF of the last equation indicates such a constraint that either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.", "Note that the part {B} satisfies ML(A, C) ∧ CL(B, C).", "If you want to remove {B} and {C}, you can use exclusive disjunctions.", "For the sake of simplicity, we omit descriptions about W when its instance is arbitrary or obvious from now on.", "The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.", "Let MIS(G) be the set of maximal independent sets of graph G, and define $L := \{\{w, w'\} \mid w, w' \in W, w \neq w'\}$.", "Theorem 2. For any (∧)-expression of links characterized by $\ell \subseteq L$, its ATF $\bigcup_{X \in \mathcal{X}} \left( \bigcap_{x \in X} \mathrm{Np}^*(x) \right)$, where $\mathcal{X}$ is the family of word sets appearing in the monomials of its minimum DNF, is equivalent to the union of the power sets of every maximal independent set $S \in \mathrm{MIS}(G)$ of the graph $G := (W, \ell)$, that is, $\bigcup_{X \in \mathcal{X}} \left( \bigcap_{x \in X} \mathrm{Np}^*(x) \right) = \bigcup_{S \in \mathrm{MIS}(G)} 2^S$.", "Proof.", "For any (∧)-expression of links characterized by $\ell \subseteq L$, we denote $f_\ell$ and $G_\ell$ as the corresponding minimum DNF and graph, respectively.", "We define $U_\ell := \bigcup_{S \in \mathrm{MIS}(G_\ell)} 2^S$.", "When $|\ell| = 1$, $f_\ell^* = U_\ell$ is trivial.", "Assuming $f_\ell^* = U_\ell$ for a given $\ell$ with $|\ell| \geq 1$, for any set $\ell' := \ell \cup \{\{A, B\}\}$ with an additional link characterized by $\{A, B\} \in L$, we obtain $f_{\ell'}^* = ((\mathrm{Np}(A) \vee \mathrm{Np}(B)) \wedge f_\ell)^* = \left( 2^{W-\{A\}} \cup 2^{W-\{B\}} \right) \cap U_\ell = \bigcup_{S \in \mathrm{MIS}(G_\ell)} \left( (2^{W-\{A\}} \cap 2^S) \cup (2^{W-\{B\}} \cap 2^S) \right) = \bigcup_{S \in \mathrm{MIS}(G_\ell)} \left( 2^{S-\{A\}} \cup 2^{S-\{B\}} \right) = \bigcup_{S \in \mathrm{MIS}(G_{\ell'})} 2^S = U_{\ell'}$.", "This proves the theorem by induction.", "In the last line of the above derivation, we used $\bigcup_{S \in \mathrm{MIS}(G)} 2^S = \bigcup_{S \in \mathrm{IS}(G)} 2^S$ and $\mathrm{MIS}(G_{\ell'}) \subseteq \bigcup_{S \in \mathrm{MIS}(G_\ell)} ((S - \{A\}) \cup (S - \{B\})) \subseteq \mathrm{IS}(G_{\ell'})$, where IS(G) represents the set of all independent sets on graph G.",
In the above theorem, ∪ X∈X (∩ x∈X Np * (x) ) represents asymptotic behaviors of our method, while ∪ S∈MIS(G) 2 S represents those of the existing method.", "By using a similar argument to the proof, we can prove the elements of the two sets are completely the same, i.e., ∩ x∈X Np * (x) = {2 S | S ∈ MIS(G)}.", "This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.", "Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which means the performance improvement of Gibbs sampling for Dirichlet trees.", "This is achieved just by minimizing DNF on asymptotic equivalence relation defined as follows.", "Definition 3 (Asymptotic Equivalence Relation).", "Given two (∧, ∨)-expressions f 1 , f 2 , we say that f 1 is asymptotically equivalent to f 2 , if and only if f * 1 = f * 2 .", "We denote the relation as notation ≍, that is, f 1 ≍ f 2 ⇔ f * 1 = f * 2 .", "The next proposition gives an intuitive understanding of why asymptotic equivalence relation can shrink Dirichlet forests.", "Proposition 4.", "For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B) Proof.", "We prove (a) only.", "We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.", "In the experiment, we prepare conjunctions of random links of MLs and CLs when |W| = 10, and compare the average numbers of Dirichlet trees compiled by minimum DNF (M-DNF) and asymptotic minimum DNF (AM-DNF) in 100 trials.", "The experimental result shown in Tab.", "1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees especially when the number of links is large.", "Customizing New Links Two primitives Ep and Np allow us to easily customize new links without changing the algorithm.", "Let us consider Imply-Link (A, B) or IL(A, B) , which is a constraint that B must appear if A appears in a topic (informally, A → B).", "In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) IL(A, B) is effective when B has multiple meanings as mentioned later in Sec.", "4. 
with respect to W = {A, B} is {∅, {A, B}, {B}}.", "Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to be the same meaning of IL(A, B) ∧ IL(B, A) .", "However, this anticipation is wrong on the normal equivalency, i.e., ML(A, B) ̸ = IL(A, B) ∧ IL(B, A) .", "The asymptotic equivalency can fulfill the anticipation with the next proposition.", "This simultaneously suggests that our definition is semantically valid.", "IL(B, A) ≍ ML(A, B) Proof.", "From Proposition 4, Ep(A, B) = ML(A, B) Further, we can construct XIL(X 1 , · · · , X n , Y ) as an extended version of IL (A, B) , which allows us to use multiple conditions like Horn clauses.", "This informally means ∧ n i=1 X i → Y as an extension of A → B.", "In this case, we set Proposition 5.", "For any two words A, B ∈ W, IL(A, B) ∧ IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ XIL(X 1 , · · · , X n , Y ) = n ∧ i=1 Ep(X i , Y )∨ n ∨ i=1 Np(X i ).", "When we want to isolate unnecessary words (i.e., stop words), we can use Isolate-Link (ISL) defined as ISL(X 1 , · · · , X n ) = n ∧ i=1 Np(X i ).", "This is easier than considering CLs between highfrequency words and unnecessary words as described in ).", "Negation of Links There are two types of interpretation for negation of links.", "One is strong negation, which regards ¬ML (A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".", "We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we just remove ¬ML(A, B) for weak negation.", "We consider the strong negation in this study.", "According to Def.", "1, the ATF of the negation ¬f of primitive f seems to be defined as (¬f ) * := 2 W − f * .", "However, this definition is not fit in strong negation, since ¬ML(A, B) ̸ ≍ CL(A, B) on the definition.", "Thus we define it to be fit in strong negation as follows.", "Definition 6 (ATF of strong negation of links).", "Given a link L with arguments X 1 , · · · , X n , letting f L be the primitives of L, we define the ATF of the negation of L as (¬L(X 1 , · · · , X n )) * := (2 W − f * L (X 1 , · · · , X n )) ∪ 2 W−{X 1 ,··· ,Xn} .", "Note that the definition is used not for primitives but for links.", "Actually, the similar definition for primitives is not fit in strong negation, and so we must remove all negations in a preprocessing stage.", "The next proposition gives the way to remove the negation of each link treated in this study.", "We define no constraint condition as ϵ for the result of ISL.", "Proposition 7.", "For any words A, B, X 1 , · · · , X n , Y ∈ W, (a) ¬ML(A, B) ≍ CL(A, B) (b) ¬CL(A, B) ≍ ML(A, B) (c) ¬IL(A, B) ≍ Np(B) (d) ¬XIL(X 1 , · · · , X n , Y ) ≍ ∧ n−1 i=1 Ep(X i , X n ) ∧ Np(Y ) (e) ¬ISL(X 1 , · · · , X n ) ≍ ϵ Proof.", "We prove (a) only.", "(¬ML (A, B) ) * = (2 W − Ep * (A, B) (CL(A, B) ) * ) ∪ 2 W−{A,B} = (2 {A,B} − {∅, {A, B}}) ⊗ 2 W−{A,B} ∪ 2 W−{A,B} = {∅, {A}, {B}} ⊗ 2 W−{A,B} = 2 W−{A} ∪ 2 W−{B} = Np * (A) ∪ Np * (B) = Comparison on a Synthetic Corpus We experiment using a synthetic corpus {ABAB, ACAC} × 2 with vocabulary W = {A, B, C} to clarify the property of our method in the same way as in the existing work .", "We set topic size as T = 2.", "The goal of this experiment is to obtain two topics: a topic where A and B frequently occur and a topic where A and C frequently occur.", "We abbreviate the grouping type as AB|AC.", 
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.", "Thus, we naively classify a grouping type of each result into the four types.", "Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.", "Fig.", "2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.", "The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.", "Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).", "The result of (1) is the same result as LDA, because of no constraints.", "In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.", "As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.", "The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.", "Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.", "The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).", "The reason is that (5) allows A to appear with C, while (4) does not.", "In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.", "Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.", "Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.", "2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.", "Each topic label is determined by looking carefully at highfrequency words in the topic.", "To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' had other topics such as a topic about comedy film Big 
Lebowski.", "We finally added ML('star', 'jedi') ∨ ML ('star', ' trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics considering polysemy of 'star'.", "After the fourth run of LDA-DF, we appropriately obtained two topics about Star Wars and Star Trek as in the fourth block.", "Note that our solution is not ad-hoc, and we can easily apply it to similar problems.", "Conclusions We proposed a simple method to achieve topic models with logical constraints on words.", "Our method compiles a given constraint to the prior of LDA-DF, which is a recently developed semisupervised extension of LDA with Dirichlet forest priors.", "As well as covering the constraints in the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.", "We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.", "In the comparative Table 2 : Characteristic topics obtained in the experiment on the real corpus.", "Four blocks in the table corresponds to the results of the four constraints ϵ, ISL(· · · ), CL('jedi', 'trek') ∧ ISL(· · · ), and (ML('jedi', 'trek') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(· · · ), respectively.", "Topic High frequency words in each topic ?", "have give night film turn performance ?", "not life have own first only family tell ?", "movie have n't get good not see ?", "have black scene tom death die joe ?", "film have n't not make out well see Isolated have film movie not good make n't ?", "star war trek planet effect special Comedy comedy funny laugh school hilarious Disney disney voice mulan animated song Family life love family mother woman father Isolated have film movie not make good n't StarWars star war lucas effect jedi special ?", "science world trek fiction lebowski Comedy funny comedy laugh get hilarious Disney disney truman voice toy show Family family father mother boy child son Isolated have film movie not make good n't StarWars star war toy jedi menace phantom StarTrek alien effect star science special trek Comedy comedy funny laugh hilarious joke Disney disney voice animated mulan Family life love family man story child study on a synthetic corpus, we clarified the property of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.", "In the future, we intend to address detail comparative studies on real corpora and consider a simple method integrating negations into a whole, although we removed them in a preprocessing stage in this study." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "LDA with Dirichlet Forest Priors", "Logical Constraints on Words", "(∧,∨)-expressions of Links", "Shrinking Dirichlet Forests", "Customizing New Links", "Negation of Links", "Comparison on a Synthetic Corpus", "Interactive Topic Analysis", "Conclusions" ] }
GEM-SciDuet-train-119#paper-1323#slide-3
This work
Logical constraints on words for topic modeling: conjunctions (∧), disjunctions (∨), negations (¬). Want to split a merged topic [figure: word clusters around 'fire', 'dance', 'sexy', 'ancient', 'bikini' with ML/CL links such as Must-Link(dance, sexy)]. Algorithm to generate logically constrained distributions on LDA-DF. We cannot apply the existing algorithm: this constraint cannot be mapped to a graph.
Logical constraints on words for topic modeling: conjunctions (∧), disjunctions (∨), negations (¬). Want to split a merged topic [figure: word clusters around 'fire', 'dance', 'sexy', 'ancient', 'bikini' with ML/CL links such as Must-Link(dance, sexy)]. Algorithm to generate logically constrained distributions on LDA-DF. We cannot apply the existing algorithm: this constraint cannot be mapped to a graph.
[]
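Definition 1's Asymptotic Topic Family can be checked mechanically by enumerating subsets of W, which is convenient for small constraints like the kung-fu/jackie/bruce example. The following Python sketch is only an illustration under assumed names (Ep, Np, ML, CL as predicates); enumeration is exponential in |W|, so it is meant for tiny vocabularies, and nothing here is the paper's code.

from itertools import combinations

def powerset(W):
    ws = sorted(W)
    return [frozenset(c) for r in range(len(ws) + 1) for c in combinations(ws, r)]

# Primitives as predicates over a candidate word set s (Def. 1):
# Ep*(A, B) keeps sets containing both or neither of A, B; Np*(A) bans A.
def Ep(a, b): return lambda s: (a in s) == (b in s)
def Np(a):    return lambda s: a not in s
def And(*fs): return lambda s: all(f(s) for f in fs)
def Or(*fs):  return lambda s: any(f(s) for f in fs)

# Links rewritten via the substitution of Sec. 3.1.
def ML(a, b): return Ep(a, b)
def CL(a, b): return Or(Np(a), Np(b))

def atf(f, W):
    # All word combinations allowed to co-occur in one topic as eta grows large.
    return {s for s in powerset(W) if f(s)}

W = {'kung-fu', 'jackie', 'bruce'}
f = And(Or(ML('kung-fu', 'jackie'), ML('kung-fu', 'bruce')), CL('jackie', 'bruce'))
for s in sorted(atf(f, W), key=lambda s: (len(s), sorted(s))):
    print(set(s) or '{}')
# Prints {}, {'bruce'}, {'jackie'}, {'kung-fu', 'jackie'}, {'kung-fu', 'bruce'},
# matching the ATF computed in Sec. 3.1.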
GEM-SciDuet-train-119#paper-1323#slide-4
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177 ], "paper_content_text": [ "Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.", "When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.", "Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.", "Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .", "For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.", "Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.", "We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.", "In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.", "However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?", "Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.", "In this paper, we address such logical expressions of links on LDA-DF framework.", "Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.", "At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.", "This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.", "LDA with Dirichlet Forest Priors We briefly review LDA-DF.", "Let w := w 1 .", ".", ".", "w n be a corpus consisting of D documents, where n is the total number of words in the documents.", "Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.", 
"Let T be the number of topics.", "As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.", "The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.", "The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.", "The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.", "The trees assigned to topics z are denoted as q.", "In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.", "1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.", "This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).", "In the case of CLs, we use the following algorithm.", "For examples, the algorithm creates the two trees in Fig.", "1 (b) for the constraint CL(A, B) ∧ CL(A, C).", "The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.", "Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. 
n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .", "I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .", "C z (s) represents the set of children of node s in tree q z .", "γ (k) z represents a weight of the edge to node k in tree q z .", "Additionally, we define ∑ S s := ∑ s∈S .", "Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s   Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z )   , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .", "|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.", "After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.", "θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).", "We denote it as (∧,∨,¬)-expressions.", "Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.", "Interpretation of negations is discussed in Sec.", "3.4.", "(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.", "The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.", "One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.", "1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.", "1(b) .", "The right tree of Fig.", "1(b) is created by Np(B) ∧ Np(C).", "Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.", "1.", "Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).", "2.", "Calculate the minimum DNF of the primitives.", "3.", "Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.", "Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.", "1.", "We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .", "In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.", "Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms 
of an asymptotic behavior of Dirichlet trees.", "We define asymptotic behavior as Asymptotic Topic Family (ATF) as follows.", "Definition 1 (Asymptotic Topic Family).", "For any (∧, ∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as a family f * calculated by the following rules: Given (∧, ∨)-expressions f 1 and f 2 of primitives and words A, B ∈ W, (i) (f 1 ∨ f 2 ) * := f * 1 ∪ f * 2 (ii) (f 1 ∧ f 2 ) * := f * 1 ∩ f * 2 (iii) Ep * (A, B) := {∅, {A, B}} ⊗ 2 W−{A,B} , (iv) Np * (A) := 2 W−{A} Here, notation ⊗ is defined as X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y } for given two sets X and Y .", "ATF expresses all combinations of words that can occur in a topic when η is large.", "In the above example, the ATF of its expression with respect to W = {A, B, C} is calculated as ((ML(A, B) ∨ ML(A, C)) ∧ CL(B, C)) * = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)) * = ( {∅, {A, B}} ⊗ 2 W−{A,B} ∪{∅, {A, C}} ⊗ 2 W−{A,C} ) ∩ ( 2 W−{B} ∪ 2 W−{C} ) = {∅, {B}, {C}, {A, B}, {A, C}}.", "As we expected, the ATF of the last equation indicates such a constraint that either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.", "Note that the part of {B} satisfies ML(A, C) ∧ CL(B, C).", "If you want to remove {B} and {C}, you can use exclusive disjunctions.", "For the sake of simplicity, we omit descriptions about W when its instance is arbitrary or obvious from now on.", "The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.", "Let MIS(G) be the set of maximal independent sets of graph G. We define (x) ) is equivalent to the union of the power sets of every max- L := {{w, w ′ } | w, w ′ ∈ W, w ̸ = w ′ }.", "imal independent set S ∈ MIS(G) of a graph G := (W, ℓ), that is, ∪ X∈X (∩ x∈X Np * (x) ) = ∪ S∈MIS(G) 2 S .", "Proof.", "For any (∧)-expressions of links characterized by ℓ ⊆ L, we denote f ℓ and G ℓ as the corresponding minimum DNF and graph, respectively.", "We define U ℓ := ∪ S∈MIS(G ℓ ) 2 S .", "When |ℓ| = 1, f * ℓ = U ℓ is trivial.", "Assuming f * ℓ = U ℓ when |ℓ| > 1, for any set ℓ ′ := ℓ ∪ {{A, B}} with an additional link characterized by {A, B} ∈ L, we obtain f * ℓ ′ = ((Np(A) ∨ Np(B)) ∧ f ℓ ) * = (2 W−{A} ∪ 2 W−{B} ) ∩ U ℓ = ∪ S∈MIS(G ℓ ) ( (2 W−{A} ∩ 2 S ) ∪(2 W−{B} ∩ 2 S ) ) = ∪ S∈MIS(G ℓ ) (2 S−{A} ∪ 2 S−{B} ) = ∪ S∈MIS(G ℓ ′ ) 2 S = U ℓ ′ This proves the theorem by induction.", "In the last line of the above deformation, we used ∪ S∈MIS(G) 2 S = ∪ S∈IS(G) 2 S and MIS(G ℓ ′ ) ⊆ ∪ S∈MIS(G ℓ ) ((S − {A}) ∪ (S − {B})) ⊆ IS(G ℓ ′ ), where IS(G) represents the set of all independent sets on graph G. 
In the above theorem, ∪ X∈X (∩ x∈X Np * (x) ) represents asymptotic behaviors of our method, while ∪ S∈MIS(G) 2 S represents those of the existing method.", "By using a similar argument to the proof, we can prove the elements of the two sets are completely the same, i.e., ∩ x∈X Np * (x) = {2 S | S ∈ MIS(G)}.", "This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.", "Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which means the performance improvement of Gibbs sampling for Dirichlet trees.", "This is achieved just by minimizing DNF on asymptotic equivalence relation defined as follows.", "Definition 3 (Asymptotic Equivalence Relation).", "Given two (∧, ∨)-expressions f 1 , f 2 , we say that f 1 is asymptotically equivalent to f 2 , if and only if f * 1 = f * 2 .", "We denote the relation as notation ≍, that is, f 1 ≍ f 2 ⇔ f * 1 = f * 2 .", "The next proposition gives an intuitive understanding of why asymptotic equivalence relation can shrink Dirichlet forests.", "Proposition 4.", "For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B) Proof.", "We prove (a) only.", "We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.", "In the experiment, we prepare conjunctions of random links of MLs and CLs when |W| = 10, and compare the average numbers of Dirichlet trees compiled by minimum DNF (M-DNF) and asymptotic minimum DNF (AM-DNF) in 100 trials.", "The experimental result shown in Tab.", "1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees especially when the number of links is large.", "Customizing New Links Two primitives Ep and Np allow us to easily customize new links without changing the algorithm.", "Let us consider Imply-Link (A, B) or IL(A, B) , which is a constraint that B must appear if A appears in a topic (informally, A → B).", "In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) IL(A, B) is effective when B has multiple meanings as mentioned later in Sec.", "4. 
with respect to W = {A, B} is {∅, {A, B}, {B}}.", "Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to be the same meaning of IL(A, B) ∧ IL(B, A) .", "However, this anticipation is wrong on the normal equivalency, i.e., ML(A, B) ̸ = IL(A, B) ∧ IL(B, A) .", "The asymptotic equivalency can fulfill the anticipation with the next proposition.", "This simultaneously suggests that our definition is semantically valid.", "IL(B, A) ≍ ML(A, B) Proof.", "From Proposition 4, Ep(A, B) = ML(A, B) Further, we can construct XIL(X 1 , · · · , X n , Y ) as an extended version of IL (A, B) , which allows us to use multiple conditions like Horn clauses.", "This informally means ∧ n i=1 X i → Y as an extension of A → B.", "In this case, we set Proposition 5.", "For any two words A, B ∈ W, IL(A, B) ∧ IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ XIL(X 1 , · · · , X n , Y ) = n ∧ i=1 Ep(X i , Y )∨ n ∨ i=1 Np(X i ).", "When we want to isolate unnecessary words (i.e., stop words), we can use Isolate-Link (ISL) defined as ISL(X 1 , · · · , X n ) = n ∧ i=1 Np(X i ).", "This is easier than considering CLs between highfrequency words and unnecessary words as described in ).", "Negation of Links There are two types of interpretation for negation of links.", "One is strong negation, which regards ¬ML (A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".", "We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we just remove ¬ML(A, B) for weak negation.", "We consider the strong negation in this study.", "According to Def.", "1, the ATF of the negation ¬f of primitive f seems to be defined as (¬f ) * := 2 W − f * .", "However, this definition is not fit in strong negation, since ¬ML(A, B) ̸ ≍ CL(A, B) on the definition.", "Thus we define it to be fit in strong negation as follows.", "Definition 6 (ATF of strong negation of links).", "Given a link L with arguments X 1 , · · · , X n , letting f L be the primitives of L, we define the ATF of the negation of L as (¬L(X 1 , · · · , X n )) * := (2 W − f * L (X 1 , · · · , X n )) ∪ 2 W−{X 1 ,··· ,Xn} .", "Note that the definition is used not for primitives but for links.", "Actually, the similar definition for primitives is not fit in strong negation, and so we must remove all negations in a preprocessing stage.", "The next proposition gives the way to remove the negation of each link treated in this study.", "We define no constraint condition as ϵ for the result of ISL.", "Proposition 7.", "For any words A, B, X 1 , · · · , X n , Y ∈ W, (a) ¬ML(A, B) ≍ CL(A, B) (b) ¬CL(A, B) ≍ ML(A, B) (c) ¬IL(A, B) ≍ Np(B) (d) ¬XIL(X 1 , · · · , X n , Y ) ≍ ∧ n−1 i=1 Ep(X i , X n ) ∧ Np(Y ) (e) ¬ISL(X 1 , · · · , X n ) ≍ ϵ Proof.", "We prove (a) only.", "(¬ML (A, B) ) * = (2 W − Ep * (A, B) (CL(A, B) ) * ) ∪ 2 W−{A,B} = (2 {A,B} − {∅, {A, B}}) ⊗ 2 W−{A,B} ∪ 2 W−{A,B} = {∅, {A}, {B}} ⊗ 2 W−{A,B} = 2 W−{A} ∪ 2 W−{B} = Np * (A) ∪ Np * (B) = Comparison on a Synthetic Corpus We experiment using a synthetic corpus {ABAB, ACAC} × 2 with vocabulary W = {A, B, C} to clarify the property of our method in the same way as in the existing work .", "We set topic size as T = 2.", "The goal of this experiment is to obtain two topics: a topic where A and B frequently occur and a topic where A and C frequently occur.", "We abbreviate the grouping type as AB|AC.", 
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.", "Thus, we naively classify a grouping type of each result into the four types.", "Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.", "Fig.", "2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.", "The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.", "Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).", "The result of (1) is the same result as LDA, because of no constraints.", "In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.", "As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.", "The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.", "Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.", "The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).", "The reason is that (5) allows A to appear with C, while (4) does not.", "In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.", "Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.", "Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.", "2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.", "Each topic label is determined by looking carefully at highfrequency words in the topic.", "To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' had other topics such as a topic about comedy film Big 
Lebowski.", "We finally added ML('star', 'jedi') ∨ ML ('star', ' trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics considering polysemy of 'star'.", "After the fourth run of LDA-DF, we appropriately obtained two topics about Star Wars and Star Trek as in the fourth block.", "Note that our solution is not ad-hoc, and we can easily apply it to similar problems.", "Conclusions We proposed a simple method to achieve topic models with logical constraints on words.", "Our method compiles a given constraint to the prior of LDA-DF, which is a recently developed semisupervised extension of LDA with Dirichlet forest priors.", "As well as covering the constraints in the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.", "We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.", "In the comparative Table 2 : Characteristic topics obtained in the experiment on the real corpus.", "Four blocks in the table corresponds to the results of the four constraints ϵ, ISL(· · · ), CL('jedi', 'trek') ∧ ISL(· · · ), and (ML('jedi', 'trek') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(· · · ), respectively.", "Topic High frequency words in each topic ?", "have give night film turn performance ?", "not life have own first only family tell ?", "movie have n't get good not see ?", "have black scene tom death die joe ?", "film have n't not make out well see Isolated have film movie not good make n't ?", "star war trek planet effect special Comedy comedy funny laugh school hilarious Disney disney voice mulan animated song Family life love family mother woman father Isolated have film movie not make good n't StarWars star war lucas effect jedi special ?", "science world trek fiction lebowski Comedy funny comedy laugh get hilarious Disney disney truman voice toy show Family family father mother boy child son Isolated have film movie not make good n't StarWars star war toy jedi menace phantom StarTrek alien effect star science special trek Comedy comedy funny laugh hilarious joke Disney disney voice animated mulan Family life love family man story child study on a synthetic corpus, we clarified the property of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.", "In the future, we intend to address detail comparative studies on real corpora and consider a simple method integrating negations into a whole, although we removed them in a preprocessing stage in this study." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "LDA with Dirichlet Forest Priors", "Logical Constraints on Words", "(∧,∨)-expressions of Links", "Shrinking Dirichlet Forests", "Customizing New Links", "Negation of Links", "Comparison on a Synthetic Corpus", "Interactive Topic Analysis", "Conclusions" ] }
GEM-SciDuet-train-119#paper-1323#slide-4
Latent Dirichlet Allocation LDA Blei JMLR2003
Famous topic modeling method. (i) Assume a generative model of documents: each topic is a distribution over words; each document is a distribution over topics; both are drawn from Dirichlet distributions, which generate discrete distributions. (ii) Infer the parameters of the two distributions by inverting the generative model.
Famous topic modeling method. (i) Assume a generative model of documents: each topic is a distribution over words; each document is a distribution over topics; both are drawn from Dirichlet distributions, which generate discrete distributions. (ii) Infer the parameters of the two distributions by inverting the generative model.
[]
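As a companion to the slide's step (i), here is a toy sketch of the LDA generative process; the dimensions, seed, and variable names are arbitrary assumptions for illustration. In LDA-DF, the only change to this picture is that each row of phi would be drawn from a sampled Dirichlet tree rather than a flat Dirichlet.

import numpy as np

rng = np.random.default_rng(0)
T, V, D, N = 2, 5, 3, 8      # topics, vocabulary size, documents, tokens per doc
alpha, beta = 1.0, 0.01      # symmetric Dirichlet hyperparameters

phi = rng.dirichlet([beta] * V, size=T)     # each topic: a distribution over words
theta = rng.dirichlet([alpha] * T, size=D)  # each document: a distribution over topics

corpus = []
for d in range(D):
    doc = []
    for _ in range(N):
        z = rng.choice(T, p=theta[d])  # draw a topic for this token
        w = rng.choice(V, p=phi[z])    # draw a word id from that topic
        doc.append(int(w))
    corpus.append(doc)
print(corpus)  # step (ii), inference, inverts this process (e.g., by Gibbs sampling)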
GEM-SciDuet-train-119#paper-1323#slide-5
1323
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177 ], "paper_content_text": [ "Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.", "When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.", "Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.", "Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .", "For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.", "Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.", "We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.", "In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.", "However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?", "Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.", "In this paper, we address such logical expressions of links on LDA-DF framework.", "Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.", "At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.", "This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.", "LDA with Dirichlet Forest Priors We briefly review LDA-DF.", "Let w := w 1 .", ".", ".", "w n be a corpus consisting of D documents, where n is the total number of words in the documents.", "Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.", 
"Let T be the number of topics.", "As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.", "The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.", "The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.", "The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.", "The trees assigned to topics z are denoted as q.", "In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.", "1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.", "This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).", "In the case of CLs, we use the following algorithm.", "For examples, the algorithm creates the two trees in Fig.", "1 (b) for the constraint CL(A, B) ∧ CL(A, C).", "The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.", "Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. 
n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .", "I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .", "C z (s) represents the set of children of node s in tree q z .", "γ (k) z represents a weight of the edge to node k in tree q z .", "Additionally, we define ∑ S s := ∑ s∈S .", "Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s   Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z )   , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .", "|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.", "After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.", "θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).", "We denote it as (∧,∨,¬)-expressions.", "Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.", "Interpretation of negations is discussed in Sec.", "3.4.", "(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.", "The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.", "One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.", "1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.", "1(b) .", "The right tree of Fig.", "1(b) is created by Np(B) ∧ Np(C).", "Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.", "1.", "Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).", "2.", "Calculate the minimum DNF of the primitives.", "3.", "Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.", "Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.", "1.", "We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .", "In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.", "Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms 
"We define this asymptotic behavior as the Asymptotic Topic Family (ATF), as follows.", "Definition 1 (Asymptotic Topic Family). For any (∧,∨)-expression $f$ of primitives and any set $W$ of words, we define the asymptotic topic family of $f$ with respect to $W$ as the family $f^*$ calculated by the following rules: given (∧,∨)-expressions $f_1$ and $f_2$ of primitives and words $A, B \in W$, (i) $(f_1 \lor f_2)^* := f_1^* \cup f_2^*$; (ii) $(f_1 \land f_2)^* := f_1^* \cap f_2^*$; (iii) $\mathrm{Ep}^*(A, B) := \{\emptyset, \{A, B\}\} \otimes 2^{W - \{A, B\}}$; (iv) $\mathrm{Np}^*(A) := 2^{W - \{A\}}$.", "Here, the notation $\otimes$ is defined as $X \otimes Y := \{x \cup y \mid x \in X, y \in Y\}$ for two given sets $X$ and $Y$.", "The ATF expresses all combinations of words that can occur in a topic when $\eta$ is large.", "In the above example, the ATF of its expression with respect to $W = \{A, B, C\}$ is calculated as \[ ((\mathrm{ML}(A, B) \lor \mathrm{ML}(A, C)) \land \mathrm{CL}(B, C))^* = ((\mathrm{Ep}(A, B) \lor \mathrm{Ep}(A, C)) \land (\mathrm{Np}(B) \lor \mathrm{Np}(C)))^* = \bigl(\{\emptyset, \{A, B\}\} \otimes 2^{W - \{A, B\}} \cup \{\emptyset, \{A, C\}\} \otimes 2^{W - \{A, C\}}\bigr) \cap \bigl(2^{W - \{B\}} \cup 2^{W - \{C\}}\bigr) = \{\emptyset, \{B\}, \{C\}, \{A, B\}, \{A, C\}\}. \]", "As we expected, the ATF of the last equation indicates the constraint that either A and B or A and C must appear in the same topic, and that B and C cannot appear in the same topic.", "Note that the part $\{B\}$ satisfies ML(A, C) ∧ CL(B, C).", "If you want to remove $\{B\}$ and $\{C\}$, you can use exclusive disjunctions.", "For the sake of simplicity, from now on we omit descriptions of $W$ when its instance is arbitrary or obvious.", "The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.", "Let $\mathrm{MIS}(G)$ be the set of maximal independent sets of a graph $G$, and let $L := \{\{w, w'\} \mid w, w' \in W, w \neq w'\}$.", "Theorem 2. For any (∧)-expression of CLs characterized by a set of links $\ell \subseteq L$, let $\mathcal{X}$ be the set of monomials of its minimum DNF of primitives; then $\bigcup_{X \in \mathcal{X}} \bigl(\bigcap_{x \in X} \mathrm{Np}^*(x)\bigr)$ is equivalent to the union of the power sets of every maximal independent set $S \in \mathrm{MIS}(G)$ of the graph $G := (W, \ell)$, that is, $\bigcup_{X \in \mathcal{X}} \bigl(\bigcap_{x \in X} \mathrm{Np}^*(x)\bigr) = \bigcup_{S \in \mathrm{MIS}(G)} 2^S$.", "Proof. For any (∧)-expression of links characterized by $\ell \subseteq L$, we denote by $f_\ell$ and $G_\ell$ the corresponding minimum DNF and graph, respectively.", "We define $U_\ell := \bigcup_{S \in \mathrm{MIS}(G_\ell)} 2^S$.", "When $|\ell| = 1$, $f_\ell^* = U_\ell$ is trivial.", "Assuming $f_\ell^* = U_\ell$ when $|\ell| \geq 1$, for any set $\ell' := \ell \cup \{\{A, B\}\}$ with an additional link characterized by $\{A, B\} \in L$, we obtain \[ f_{\ell'}^* = ((\mathrm{Np}(A) \lor \mathrm{Np}(B)) \land f_\ell)^* = (2^{W - \{A\}} \cup 2^{W - \{B\}}) \cap U_\ell = \bigcup_{S \in \mathrm{MIS}(G_\ell)} \bigl((2^{W - \{A\}} \cap 2^S) \cup (2^{W - \{B\}} \cap 2^S)\bigr) = \bigcup_{S \in \mathrm{MIS}(G_\ell)} (2^{S - \{A\}} \cup 2^{S - \{B\}}) = \bigcup_{S \in \mathrm{MIS}(G_{\ell'})} 2^S = U_{\ell'}. \]", "This proves the theorem by induction.", "In the last line of the above derivation, we used $\bigcup_{S \in \mathrm{MIS}(G)} 2^S = \bigcup_{S \in \mathrm{IS}(G)} 2^S$ and $\mathrm{MIS}(G_{\ell'}) \subseteq \bigcup_{S \in \mathrm{MIS}(G_\ell)} \{S - \{A\}, S - \{B\}\} \subseteq \mathrm{IS}(G_{\ell'})$, where $\mathrm{IS}(G)$ represents the set of all independent sets of graph $G$.",
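Since $|W| = 3$ here, Definition 1 can also be checked by brute force; the toy sketch below (our own code, feasible only for tiny vocabularies) enumerates $2^W$ and evaluates rules (i)-(iv) as plain set operations, reproducing the family computed above.

```python
from itertools import chain, combinations

W = frozenset({"A", "B", "C"})

def powerset(s):
    s = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

def Ep(a, b):  # rule (iii): Ep*(A,B) = {{}, {A,B}} (x) 2^(W-{A,B})
    return {x | y for x in (frozenset(), frozenset({a, b}))
                  for y in powerset(W - {a, b})}

def Np(a):     # rule (iv): Np*(A) = 2^(W-{A})
    return powerset(W - {a})

# rules (i) and (ii) are just set union and intersection:
atf = (Ep("A", "B") | Ep("A", "C")) & (Np("B") | Np("C"))
print(sorted(sorted(t) for t in atf))
# -> [[], ['A', 'B'], ['A', 'C'], ['B'], ['C']]
```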
"In Theorem 2, $\bigcup_{X \in \mathcal{X}} \bigl(\bigcap_{x \in X} \mathrm{Np}^*(x)\bigr)$ represents the asymptotic behavior of our method, while $\bigcup_{S \in \mathrm{MIS}(G)} 2^S$ represents that of the existing method.", "By a similar argument to the proof, we can show that the elements of the two unions are completely the same, i.e., $\{\bigcap_{x \in X} \mathrm{Np}^*(x) \mid X \in \mathcal{X}\} = \{2^S \mid S \in \mathrm{MIS}(G)\}$.", "This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.", "Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which improves the performance of Gibbs sampling over Dirichlet trees.", "This is achieved simply by minimizing the DNF under the asymptotic equivalence relation defined as follows.", "Definition 3 (Asymptotic Equivalence Relation). Given two (∧,∨)-expressions $f_1$ and $f_2$, we say that $f_1$ is asymptotically equivalent to $f_2$ if and only if $f_1^* = f_2^*$.", "We denote the relation by ≍, that is, $f_1 \asymp f_2 \Leftrightarrow f_1^* = f_2^*$.", "The next proposition gives an intuitive understanding of why the asymptotic equivalence relation can shrink Dirichlet forests.", "Proposition 4. For any two words $A, B \in W$, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B); (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B).", "Proof. We prove (a) only; by Definition 1, $(\mathrm{Np}(A) \land \mathrm{Np}(B))^* = 2^{W - \{A\}} \cap 2^{W - \{B\}} = 2^{W - \{A, B\}}$, and hence $(\mathrm{Ep}(A, B) \lor (\mathrm{Np}(A) \land \mathrm{Np}(B)))^* = \mathrm{Ep}^*(A, B) \cup 2^{W - \{A, B\}} = \mathrm{Ep}^*(A, B)$, since $2^{W - \{A, B\}} = \{\emptyset\} \otimes 2^{W - \{A, B\}} \subseteq \{\emptyset, \{A, B\}\} \otimes 2^{W - \{A, B\}} = \mathrm{Ep}^*(A, B)$.", "We conducted an experiment to clarify how many trees can be reduced by asymptotic equivalency.", "In the experiment, we prepared conjunctions of random links of MLs and CLs with $|W| = 10$, and compared the average numbers of Dirichlet trees compiled by the minimum DNF (M-DNF) and the asymptotic minimum DNF (AM-DNF) over 100 trials.", "The experimental result shown in Tab. 1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees, especially when the number of links is large.",
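For small vocabularies, this shrinking can be brute-forced by reusing W, powerset, Ep, and Np from the previous sketch: compute each monomial's ATF and drop a monomial whenever its ATF is contained in another monomial's (equal ATFs keep a single representative). The enumeration is only feasible for toy W, but it reproduces Proposition 4 directly.

```python
def atf_of(monomial):
    fam = powerset(W)
    for p in monomial:                       # intersect the primitives' families
        fam &= Ep(p[1], p[2]) if p[0] == "Ep" else Np(p[1])
    return frozenset(fam)

def asymptotic_minimize(dnf):
    fams = {atf_of(m): tuple(m) for m in dnf}        # one monomial per distinct ATF
    return [m for f, m in fams.items() if not any(f < g for g in fams)]

# Proposition 4(a): Ep(A,B) v (Np(A) ^ Np(B)) shrinks to the single tree Ep(A,B).
print(asymptotic_minimize([[("Ep", "A", "B")], [("Np", "A"), ("Np", "B")]]))
```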
"Customizing New Links The two primitives Ep and Np allow us to easily customize new links without changing the algorithm.", "Let us consider Imply-Link(A, B), or IL(A, B), which is a constraint that B must appear if A appears in a topic (informally, A → B).", "In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) with respect to $W = \{A, B\}$ is $\{\emptyset, \{A, B\}, \{B\}\}$.", "(IL(A, B) is effective when B has multiple meanings, as mentioned later in Sec. 4.)", "Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to have the same meaning as IL(A, B) ∧ IL(B, A).", "However, this anticipation is wrong under normal equivalency, i.e., ML(A, B) ≠ IL(A, B) ∧ IL(B, A).", "The asymptotic equivalency fulfills the anticipation through the next proposition, which simultaneously suggests that our definition is semantically valid.", "Proposition 5. For any two words $A, B \in W$, IL(A, B) ∧ IL(B, A) ≍ ML(A, B).", "Proof. \[ \mathrm{IL}(A, B) \land \mathrm{IL}(B, A) = (\mathrm{Ep}(A, B) \lor \mathrm{Np}(A)) \land (\mathrm{Ep}(B, A) \lor \mathrm{Np}(B)) = \mathrm{Ep}(A, B) \lor (\mathrm{Ep}(A, B) \land \mathrm{Np}(A)) \lor (\mathrm{Ep}(A, B) \land \mathrm{Np}(B)) \lor (\mathrm{Np}(A) \land \mathrm{Np}(B)) \asymp \mathrm{Ep}(A, B) \lor (\mathrm{Np}(A) \land \mathrm{Np}(B)) \asymp \mathrm{Ep}(A, B) = \mathrm{ML}(A, B), \] where the final asymptotic step follows from Proposition 4(a).", "Further, we can construct XIL($X_1, \ldots, X_n$, Y) as an extended version of IL(A, B), which allows us to use multiple conditions like Horn clauses.", "This informally means $\bigwedge_{i=1}^{n} X_i \rightarrow Y$ as an extension of A → B.", "In this case, we set \[ \mathrm{XIL}(X_1, \ldots, X_n, Y) = \bigwedge_{i=1}^{n} \mathrm{Ep}(X_i, Y) \lor \bigvee_{i=1}^{n} \mathrm{Np}(X_i). \]", "When we want to isolate unnecessary words (i.e., stop words), we can use the Isolate-Link (ISL), defined as \[ \mathrm{ISL}(X_1, \ldots, X_n) = \bigwedge_{i=1}^{n} \mathrm{Np}(X_i). \]", "This is easier than considering CLs between high-frequency words and unnecessary words, as described in the original framework.", "Negation of Links There are two types of interpretation for the negation of links.", "One is strong negation, which regards ¬ML(A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".", "We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we simply remove ¬ML(A, B) for weak negation.", "We consider strong negation in this study.", "According to Def. 1, the ATF of the negation ¬f of a primitive f would seem to be defined as $(\lnot f)^* := 2^W - f^*$.", "However, this definition does not fit strong negation, since $\lnot\mathrm{ML}(A, B) \not\asymp \mathrm{CL}(A, B)$ under that definition.", "Thus we define it to fit strong negation as follows.", "Definition 6 (ATF of strong negation of links). Given a link $L$ with arguments $X_1, \ldots, X_n$, letting $f_L$ be the primitives of $L$, we define the ATF of the negation of $L$ as \[ (\lnot L(X_1, \ldots, X_n))^* := (2^W - f_L^*(X_1, \ldots, X_n)) \cup 2^{W - \{X_1, \ldots, X_n\}}. \]", "Note that the definition is used not for primitives but for links.", "Actually, the similar definition for primitives does not fit strong negation, and so we must remove all negations in a preprocessing stage.", "The next proposition gives the way to remove the negation of each link treated in this study.", "We define the no-constraint condition as ϵ for the result of ISL.", "Proposition 7. For any words $A, B, X_1, \ldots, X_n, Y \in W$: (a) ¬ML(A, B) ≍ CL(A, B); (b) ¬CL(A, B) ≍ ML(A, B); (c) ¬IL(A, B) ≍ Np(B); (d) ¬XIL($X_1, \ldots, X_n$, Y) ≍ $\bigwedge_{i=1}^{n-1} \mathrm{Ep}(X_i, X_n) \land \mathrm{Np}(Y)$; (e) ¬ISL($X_1, \ldots, X_n$) ≍ ϵ.", "Proof. We prove (a) only: \[ (\lnot \mathrm{ML}(A, B))^* = (2^W - \mathrm{Ep}^*(A, B)) \cup 2^{W - \{A, B\}} = \bigl((2^{\{A, B\}} - \{\emptyset, \{A, B\}\}) \otimes 2^{W - \{A, B\}}\bigr) \cup 2^{W - \{A, B\}} = \{\emptyset, \{A\}, \{B\}\} \otimes 2^{W - \{A, B\}} = 2^{W - \{A\}} \cup 2^{W - \{B\}} = \mathrm{Np}^*(A) \cup \mathrm{Np}^*(B) = (\mathrm{CL}(A, B))^*. \]",
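In the same monomial encoding as before (reusing ML, Ep_, OR, and AND from the first sketch and asymptotic_minimize from the previous one), each customized link takes only a line or two, and Proposition 5 can be checked mechanically:

```python
def IL(a, b):                       # Imply-Link: A -> B, i.e., Ep(A,B) v Np(A)
    return OR(ML(a, b), [[("Np", a)]])

def XIL(xs, y):                     # X1 ^ ... ^ Xn -> Y
    return [[Ep_(x, y) for x in xs]] + [[("Np", x)] for x in xs]

def ISL(*xs):                       # Isolate-Link: Np(X1) ^ ... ^ Np(Xn)
    return [[("Np", x) for x in xs]]

# Proposition 5: IL(A,B) ^ IL(B,A) shrinks to the single monomial Ep(A,B) = ML(A,B).
print(asymptotic_minimize(AND(IL("A", "B"), IL("B", "A"))))
```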
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.", "Thus, we naively classify a grouping type of each result into the four types.", "Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.", "Fig.", "2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.", "The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.", "Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).", "The result of (1) is the same result as LDA, because of no constraints.", "In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.", "As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.", "The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.", "Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.", "The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).", "The reason is that (5) allows A to appear with C, while (4) does not.", "In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.", "Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.", "Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.", "2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.", "Each topic label is determined by looking carefully at highfrequency words in the topic.", "To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' had other topics such as a topic about comedy film Big 
"Interactive Topic Analysis We demonstrate the advantages of our method via interactive topic analysis on a real corpus, which consists of the stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004).", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF for 1,000 iterations without any constraints and noticed that most topics contain stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block of Tab. 2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we could identify most topics, such as Comedy, Disney, and Family, since the cumbersome words were isolated; we then noticed that the two topics about Star Wars and Star Trek were merged, as in the second block.", "Each topic label was determined by looking carefully at the high-frequency words in the topic.", "To split the two merged topics, we added CL('jedi', 'trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there was no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' was mixed with other content, such as the comedy film Big Lebowski.", "We finally added ML('star', 'jedi') ∨ ML('star', 'trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics while accounting for the polysemy of 'star'.", "After the fourth run of LDA-DF, we appropriately obtained the two topics about Star Wars and Star Trek, as in the fourth block.", "Note that our solution is not ad hoc, and we can easily apply it to similar problems.", "Table 2: Characteristic topics obtained in the experiment on the real corpus.", "The four blocks of the table correspond to the results of the four constraints ϵ, ISL(···), CL('jedi', 'trek') ∧ ISL(···), and (ML('star', 'jedi') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(···), respectively; each row lists a topic label and the high-frequency words in that topic.", "Block 1 (ϵ): ?: have give night film turn performance; ?: not life have own first only family tell; ?: movie have n't get good not see; ?: have black scene tom death die joe; ?: film have n't not make out well see.", "Block 2 (ISL added): Isolated: have film movie not good make n't; ?: star war trek planet effect special; Comedy: comedy funny laugh school hilarious; Disney: disney voice mulan animated song; Family: life love family mother woman father.", "Block 3 (CL added): Isolated: have film movie not make good n't; StarWars: star war lucas effect jedi special; ?: science world trek fiction lebowski; Comedy: funny comedy laugh get hilarious; Disney: disney truman voice toy show; Family: family father mother boy child son.", "Block 4 (ML ∨ ML added): Isolated: have film movie not make good n't; StarWars: star war toy jedi menace phantom; StarTrek: alien effect star science special trek; Comedy: comedy funny laugh hilarious joke; Disney: disney voice animated mulan; Family: life love family man story child.", "Conclusions We proposed a simple method to achieve topic models with logical constraints on words.", "Our method compiles a given constraint into the prior of LDA-DF, a recently developed semi-supervised extension of LDA with Dirichlet forest priors.", "As well as covering the constraints of the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.", "We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.", "In the comparative study on a synthetic corpus, we clarified the properties of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.", "In the future, we intend to conduct detailed comparative studies on real corpora and to consider a simple method for integrating negations as a whole, although we removed them in a preprocessing stage in this study." ] }
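For reference, the constraints of the four interactive runs can be written with the links sketched earlier (reusing ML, CL, ISL, OR, AND, and minimize; the LDA-DF sampler itself is not shown), and the printed tree counts match the ones reported above:

```python
stop = ISL("film", "movie", "have", "not", "n't")
run2 = stop
run3 = AND(CL("jedi", "trek"), stop)
run4 = AND(AND(OR(ML("star", "jedi"), ML("star", "trek")),
               CL("jedi", "trek")), stop)
for c in (run2, run3, run4):
    print(len(minimize(c)), "Dirichlet tree(s)")   # 1, 2, 4
```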
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "LDA with Dirichlet Forest Priors", "Logical Constraints on Words", "(∧,∨)-expressions of Links", "Shrinking Dirichlet Forests", "Customizing New Links", "Negation of Links", "Comparison on a Synthetic Corpus", "Interactive Topic Analysis", "Conclusions" ] }
GEM-SciDuet-train-119#paper-1323#slide-5
Generative process of documents in LDA
Each topic is a distribution on words; each document is a distribution on topics. [Figure: example word distributions for Topic 1 (yogurt, milk, food, fruit, bacteria, cream, ...) and Topic 2 (rose, oil, essential, valley, kazanlak, organic, ...), with the topic mixtures of Document 1 and Document 2.]
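A toy simulation of the generative story this slide depicts (our own sizes and hyperparameters, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
V, T, n_words = 8, 2, 20
phi = rng.dirichlet(np.full(V, 0.1), size=T)    # each topic: a distribution on words
theta = rng.dirichlet(np.full(T, 1.0))          # the document: a distribution on topics
doc = [int(rng.choice(V, p=phi[rng.choice(T, p=theta)]))   # sample topic, then word
       for _ in range(n_words)]
print(doc)
```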
[]
GEM-SciDuet-train-119#paper-1323#slide-6
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177 ], "paper_content_text": [ "Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.", "When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.", "Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.", "Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .", "For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.", "Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.", "We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.", "In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.", "However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?", "Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.", "In this paper, we address such logical expressions of links on LDA-DF framework.", "Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.", "At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.", "This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.", "LDA with Dirichlet Forest Priors We briefly review LDA-DF.", "Let w := w 1 .", ".", ".", "w n be a corpus consisting of D documents, where n is the total number of words in the documents.", "Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.", 
"Let T be the number of topics.", "As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.", "The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.", "The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.", "The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.", "The trees assigned to topics z are denoted as q.", "In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.", "1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.", "This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).", "In the case of CLs, we use the following algorithm.", "For examples, the algorithm creates the two trees in Fig.", "1 (b) for the constraint CL(A, B) ∧ CL(A, C).", "The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.", "Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. 
n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .", "I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .", "C z (s) represents the set of children of node s in tree q z .", "γ (k) z represents a weight of the edge to node k in tree q z .", "Additionally, we define ∑ S s := ∑ s∈S .", "Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s   Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z )   , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .", "|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.", "After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.", "θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).", "We denote it as (∧,∨,¬)-expressions.", "Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.", "Interpretation of negations is discussed in Sec.", "3.4.", "(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.", "The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.", "One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.", "1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.", "1(b) .", "The right tree of Fig.", "1(b) is created by Np(B) ∧ Np(C).", "Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.", "1.", "Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).", "2.", "Calculate the minimum DNF of the primitives.", "3.", "Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.", "Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.", "1.", "We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .", "In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.", "Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms 
of an asymptotic behavior of Dirichlet trees.", "We define asymptotic behavior as Asymptotic Topic Family (ATF) as follows.", "Definition 1 (Asymptotic Topic Family).", "For any (∧, ∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as a family f * calculated by the following rules: Given (∧, ∨)-expressions f 1 and f 2 of primitives and words A, B ∈ W, (i) (f 1 ∨ f 2 ) * := f * 1 ∪ f * 2 (ii) (f 1 ∧ f 2 ) * := f * 1 ∩ f * 2 (iii) Ep * (A, B) := {∅, {A, B}} ⊗ 2 W−{A,B} , (iv) Np * (A) := 2 W−{A} Here, notation ⊗ is defined as X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y } for given two sets X and Y .", "ATF expresses all combinations of words that can occur in a topic when η is large.", "In the above example, the ATF of its expression with respect to W = {A, B, C} is calculated as ((ML(A, B) ∨ ML(A, C)) ∧ CL(B, C)) * = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)) * = ( {∅, {A, B}} ⊗ 2 W−{A,B} ∪{∅, {A, C}} ⊗ 2 W−{A,C} ) ∩ ( 2 W−{B} ∪ 2 W−{C} ) = {∅, {B}, {C}, {A, B}, {A, C}}.", "As we expected, the ATF of the last equation indicates such a constraint that either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.", "Note that the part of {B} satisfies ML(A, C) ∧ CL(B, C).", "If you want to remove {B} and {C}, you can use exclusive disjunctions.", "For the sake of simplicity, we omit descriptions about W when its instance is arbitrary or obvious from now on.", "The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.", "Let MIS(G) be the set of maximal independent sets of graph G. We define (x) ) is equivalent to the union of the power sets of every max- L := {{w, w ′ } | w, w ′ ∈ W, w ̸ = w ′ }.", "imal independent set S ∈ MIS(G) of a graph G := (W, ℓ), that is, ∪ X∈X (∩ x∈X Np * (x) ) = ∪ S∈MIS(G) 2 S .", "Proof.", "For any (∧)-expressions of links characterized by ℓ ⊆ L, we denote f ℓ and G ℓ as the corresponding minimum DNF and graph, respectively.", "We define U ℓ := ∪ S∈MIS(G ℓ ) 2 S .", "When |ℓ| = 1, f * ℓ = U ℓ is trivial.", "Assuming f * ℓ = U ℓ when |ℓ| > 1, for any set ℓ ′ := ℓ ∪ {{A, B}} with an additional link characterized by {A, B} ∈ L, we obtain f * ℓ ′ = ((Np(A) ∨ Np(B)) ∧ f ℓ ) * = (2 W−{A} ∪ 2 W−{B} ) ∩ U ℓ = ∪ S∈MIS(G ℓ ) ( (2 W−{A} ∩ 2 S ) ∪(2 W−{B} ∩ 2 S ) ) = ∪ S∈MIS(G ℓ ) (2 S−{A} ∪ 2 S−{B} ) = ∪ S∈MIS(G ℓ ′ ) 2 S = U ℓ ′ This proves the theorem by induction.", "In the last line of the above deformation, we used ∪ S∈MIS(G) 2 S = ∪ S∈IS(G) 2 S and MIS(G ℓ ′ ) ⊆ ∪ S∈MIS(G ℓ ) ((S − {A}) ∪ (S − {B})) ⊆ IS(G ℓ ′ ), where IS(G) represents the set of all independent sets on graph G. 
In the above theorem, ∪ X∈X (∩ x∈X Np * (x) ) represents asymptotic behaviors of our method, while ∪ S∈MIS(G) 2 S represents those of the existing method.", "By using a similar argument to the proof, we can prove the elements of the two sets are completely the same, i.e., ∩ x∈X Np * (x) = {2 S | S ∈ MIS(G)}.", "This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.", "Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which means the performance improvement of Gibbs sampling for Dirichlet trees.", "This is achieved just by minimizing DNF on asymptotic equivalence relation defined as follows.", "Definition 3 (Asymptotic Equivalence Relation).", "Given two (∧, ∨)-expressions f 1 , f 2 , we say that f 1 is asymptotically equivalent to f 2 , if and only if f * 1 = f * 2 .", "We denote the relation as notation ≍, that is, f 1 ≍ f 2 ⇔ f * 1 = f * 2 .", "The next proposition gives an intuitive understanding of why asymptotic equivalence relation can shrink Dirichlet forests.", "Proposition 4.", "For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B) Proof.", "We prove (a) only.", "We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.", "In the experiment, we prepare conjunctions of random links of MLs and CLs when |W| = 10, and compare the average numbers of Dirichlet trees compiled by minimum DNF (M-DNF) and asymptotic minimum DNF (AM-DNF) in 100 trials.", "The experimental result shown in Tab.", "1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees especially when the number of links is large.", "Customizing New Links Two primitives Ep and Np allow us to easily customize new links without changing the algorithm.", "Let us consider Imply-Link (A, B) or IL(A, B) , which is a constraint that B must appear if A appears in a topic (informally, A → B).", "In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) IL(A, B) is effective when B has multiple meanings as mentioned later in Sec.", "4. 
with respect to W = {A, B} is {∅, {A, B}, {B}}.", "Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to be the same meaning of IL(A, B) ∧ IL(B, A) .", "However, this anticipation is wrong on the normal equivalency, i.e., ML(A, B) ̸ = IL(A, B) ∧ IL(B, A) .", "The asymptotic equivalency can fulfill the anticipation with the next proposition.", "This simultaneously suggests that our definition is semantically valid.", "IL(B, A) ≍ ML(A, B) Proof.", "From Proposition 4, Ep(A, B) = ML(A, B) Further, we can construct XIL(X 1 , · · · , X n , Y ) as an extended version of IL (A, B) , which allows us to use multiple conditions like Horn clauses.", "This informally means ∧ n i=1 X i → Y as an extension of A → B.", "In this case, we set Proposition 5.", "For any two words A, B ∈ W, IL(A, B) ∧ IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ XIL(X 1 , · · · , X n , Y ) = n ∧ i=1 Ep(X i , Y )∨ n ∨ i=1 Np(X i ).", "When we want to isolate unnecessary words (i.e., stop words), we can use Isolate-Link (ISL) defined as ISL(X 1 , · · · , X n ) = n ∧ i=1 Np(X i ).", "This is easier than considering CLs between highfrequency words and unnecessary words as described in ).", "Negation of Links There are two types of interpretation for negation of links.", "One is strong negation, which regards ¬ML (A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".", "We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we just remove ¬ML(A, B) for weak negation.", "We consider the strong negation in this study.", "According to Def.", "1, the ATF of the negation ¬f of primitive f seems to be defined as (¬f ) * := 2 W − f * .", "However, this definition is not fit in strong negation, since ¬ML(A, B) ̸ ≍ CL(A, B) on the definition.", "Thus we define it to be fit in strong negation as follows.", "Definition 6 (ATF of strong negation of links).", "Given a link L with arguments X 1 , · · · , X n , letting f L be the primitives of L, we define the ATF of the negation of L as (¬L(X 1 , · · · , X n )) * := (2 W − f * L (X 1 , · · · , X n )) ∪ 2 W−{X 1 ,··· ,Xn} .", "Note that the definition is used not for primitives but for links.", "Actually, the similar definition for primitives is not fit in strong negation, and so we must remove all negations in a preprocessing stage.", "The next proposition gives the way to remove the negation of each link treated in this study.", "We define no constraint condition as ϵ for the result of ISL.", "Proposition 7.", "For any words A, B, X 1 , · · · , X n , Y ∈ W, (a) ¬ML(A, B) ≍ CL(A, B) (b) ¬CL(A, B) ≍ ML(A, B) (c) ¬IL(A, B) ≍ Np(B) (d) ¬XIL(X 1 , · · · , X n , Y ) ≍ ∧ n−1 i=1 Ep(X i , X n ) ∧ Np(Y ) (e) ¬ISL(X 1 , · · · , X n ) ≍ ϵ Proof.", "We prove (a) only.", "(¬ML (A, B) ) * = (2 W − Ep * (A, B) (CL(A, B) ) * ) ∪ 2 W−{A,B} = (2 {A,B} − {∅, {A, B}}) ⊗ 2 W−{A,B} ∪ 2 W−{A,B} = {∅, {A}, {B}} ⊗ 2 W−{A,B} = 2 W−{A} ∪ 2 W−{B} = Np * (A) ∪ Np * (B) = Comparison on a Synthetic Corpus We experiment using a synthetic corpus {ABAB, ACAC} × 2 with vocabulary W = {A, B, C} to clarify the property of our method in the same way as in the existing work .", "We set topic size as T = 2.", "The goal of this experiment is to obtain two topics: a topic where A and B frequently occur and a topic where A and C frequently occur.", "We abbreviate the grouping type as AB|AC.", 
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.", "Thus, we naively classify a grouping type of each result into the four types.", "Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.", "Fig.", "2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.", "The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.", "Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).", "The result of (1) is the same result as LDA, because of no constraints.", "In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.", "As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.", "The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.", "Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.", "The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).", "The reason is that (5) allows A to appear with C, while (4) does not.", "In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.", "Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.", "Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.", "2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.", "Each topic label is determined by looking carefully at highfrequency words in the topic.", "To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' had other topics such as a topic about comedy film Big 
Lebowski.", "We finally added ML('star', 'jedi') ∨ ML ('star', ' trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics considering polysemy of 'star'.", "After the fourth run of LDA-DF, we appropriately obtained two topics about Star Wars and Star Trek as in the fourth block.", "Note that our solution is not ad-hoc, and we can easily apply it to similar problems.", "Conclusions We proposed a simple method to achieve topic models with logical constraints on words.", "Our method compiles a given constraint to the prior of LDA-DF, which is a recently developed semisupervised extension of LDA with Dirichlet forest priors.", "As well as covering the constraints in the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.", "We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.", "In the comparative Table 2 : Characteristic topics obtained in the experiment on the real corpus.", "Four blocks in the table corresponds to the results of the four constraints ϵ, ISL(· · · ), CL('jedi', 'trek') ∧ ISL(· · · ), and (ML('jedi', 'trek') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(· · · ), respectively.", "Topic High frequency words in each topic ?", "have give night film turn performance ?", "not life have own first only family tell ?", "movie have n't get good not see ?", "have black scene tom death die joe ?", "film have n't not make out well see Isolated have film movie not good make n't ?", "star war trek planet effect special Comedy comedy funny laugh school hilarious Disney disney voice mulan animated song Family life love family mother woman father Isolated have film movie not make good n't StarWars star war lucas effect jedi special ?", "science world trek fiction lebowski Comedy funny comedy laugh get hilarious Disney disney truman voice toy show Family family father mother boy child son Isolated have film movie not make good n't StarWars star war toy jedi menace phantom StarTrek alien effect star science special trek Comedy comedy funny laugh hilarious joke Disney disney voice animated mulan Family life love family man story child study on a synthetic corpus, we clarified the property of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.", "In the future, we intend to address detail comparative studies on real corpora and consider a simple method integrating negations into a whole, although we removed them in a preprocessing stage in this study." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "LDA with Dirichlet Forest Priors", "Logical Constraints on Words", "(∧,∨)-expressions of Links", "Shrinking Dirichlet Forests", "Customizing New Links", "Negation of Links", "Comparison on a Synthetic Corpus", "Interactive Topic Analysis", "Conclusions" ] }
GEM-SciDuet-train-119#paper-1323#slide-6
Parameter inference in LDA
Infer word and topic distributions from a corpus by inverting the generative process. [Figure: inferred word distributions for Topic 1 (yogurt, milk, food, ...) and Topic 2 (rose, oil, valley, kazanlak, ...), with the topic mixtures of Document 1 and Document 2.]
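As a sketch of the inversion this slide refers to, here is a compact collapsed Gibbs sampler for plain LDA (standard LDA assumptions, our own code; LDA-DF would replace the (n_zw + beta) word factor with the product of Dirichlet-tree edge weights and per-node counts shown earlier):

```python
import numpy as np

def gibbs_lda(docs, V, T, alpha=1.0, beta=0.01, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    ndz = np.zeros((len(docs), T)); nzw = np.zeros((T, V)); nz = np.zeros(T)
    z = [rng.integers(T, size=len(d)) for d in docs]      # random initial topics
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndz[d, z[d][i]] += 1; nzw[z[d][i], w] += 1; nz[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                               # remove w_i's assignment
                ndz[d, t] -= 1; nzw[t, w] -= 1; nz[t] -= 1
                p = (ndz[d] + alpha) * (nzw[:, w] + beta) / (nz + V * beta)
                t = rng.choice(T, p=p / p.sum())          # resample its topic
                z[d][i] = t
                ndz[d, t] += 1; nzw[t, w] += 1; nz[t] += 1
    theta = (ndz + alpha) / (ndz + alpha).sum(axis=1, keepdims=True)
    phi = (nzw + beta) / (nzw + beta).sum(axis=1, keepdims=True)
    return theta, phi

# e.g., on the synthetic corpus of Sec. 4:
# theta, phi = gibbs_lda([[0, 1, 0, 1], [0, 2, 0, 2]] * 2, V=3, T=2)
```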
[]
GEM-SciDuet-train-119#paper-1323#slide-7
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177 ], "paper_content_text": [ "Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.", "When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.", "Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.", "Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .", "For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.", "Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.", "We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.", "In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.", "However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?", "Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.", "In this paper, we address such logical expressions of links on LDA-DF framework.", "Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.", "At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.", "This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.", "LDA with Dirichlet Forest Priors We briefly review LDA-DF.", "Let w := w 1 .", ".", ".", "w n be a corpus consisting of D documents, where n is the total number of words in the documents.", "Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.", 
"Let T be the number of topics.", "As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.", "The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.", "The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.", "The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.", "The trees assigned to topics z are denoted as q.", "In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.", "1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.", "This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).", "In the case of CLs, we use the following algorithm.", "For examples, the algorithm creates the two trees in Fig.", "1 (b) for the constraint CL(A, B) ∧ CL(A, C).", "The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.", "Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. 
n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .", "I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .", "C z (s) represents the set of children of node s in tree q z .", "γ (k) z represents a weight of the edge to node k in tree q z .", "Additionally, we define ∑ S s := ∑ s∈S .", "Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s   Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z )   , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .", "|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.", "After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.", "θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).", "We denote it as (∧,∨,¬)-expressions.", "Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.", "Interpretation of negations is discussed in Sec.", "3.4.", "(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.", "The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.", "One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.", "1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.", "1(b) .", "The right tree of Fig.", "1(b) is created by Np(B) ∧ Np(C).", "Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.", "1.", "Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).", "2.", "Calculate the minimum DNF of the primitives.", "3.", "Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.", "Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.", "1.", "We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .", "In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.", "Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms 
of an asymptotic behavior of Dirichlet trees.", "We define asymptotic behavior as Asymptotic Topic Family (ATF) as follows.", "Definition 1 (Asymptotic Topic Family).", "For any (∧, ∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as a family f * calculated by the following rules: Given (∧, ∨)-expressions f 1 and f 2 of primitives and words A, B ∈ W, (i) (f 1 ∨ f 2 ) * := f * 1 ∪ f * 2 (ii) (f 1 ∧ f 2 ) * := f * 1 ∩ f * 2 (iii) Ep * (A, B) := {∅, {A, B}} ⊗ 2 W−{A,B} , (iv) Np * (A) := 2 W−{A} Here, notation ⊗ is defined as X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y } for given two sets X and Y .", "ATF expresses all combinations of words that can occur in a topic when η is large.", "In the above example, the ATF of its expression with respect to W = {A, B, C} is calculated as ((ML(A, B) ∨ ML(A, C)) ∧ CL(B, C)) * = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)) * = ( {∅, {A, B}} ⊗ 2 W−{A,B} ∪{∅, {A, C}} ⊗ 2 W−{A,C} ) ∩ ( 2 W−{B} ∪ 2 W−{C} ) = {∅, {B}, {C}, {A, B}, {A, C}}.", "As we expected, the ATF of the last equation indicates such a constraint that either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.", "Note that the part of {B} satisfies ML(A, C) ∧ CL(B, C).", "If you want to remove {B} and {C}, you can use exclusive disjunctions.", "For the sake of simplicity, we omit descriptions about W when its instance is arbitrary or obvious from now on.", "The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.", "Let MIS(G) be the set of maximal independent sets of graph G. We define (x) ) is equivalent to the union of the power sets of every max- L := {{w, w ′ } | w, w ′ ∈ W, w ̸ = w ′ }.", "imal independent set S ∈ MIS(G) of a graph G := (W, ℓ), that is, ∪ X∈X (∩ x∈X Np * (x) ) = ∪ S∈MIS(G) 2 S .", "Proof.", "For any (∧)-expressions of links characterized by ℓ ⊆ L, we denote f ℓ and G ℓ as the corresponding minimum DNF and graph, respectively.", "We define U ℓ := ∪ S∈MIS(G ℓ ) 2 S .", "When |ℓ| = 1, f * ℓ = U ℓ is trivial.", "Assuming f * ℓ = U ℓ when |ℓ| > 1, for any set ℓ ′ := ℓ ∪ {{A, B}} with an additional link characterized by {A, B} ∈ L, we obtain f * ℓ ′ = ((Np(A) ∨ Np(B)) ∧ f ℓ ) * = (2 W−{A} ∪ 2 W−{B} ) ∩ U ℓ = ∪ S∈MIS(G ℓ ) ( (2 W−{A} ∩ 2 S ) ∪(2 W−{B} ∩ 2 S ) ) = ∪ S∈MIS(G ℓ ) (2 S−{A} ∪ 2 S−{B} ) = ∪ S∈MIS(G ℓ ′ ) 2 S = U ℓ ′ This proves the theorem by induction.", "In the last line of the above deformation, we used ∪ S∈MIS(G) 2 S = ∪ S∈IS(G) 2 S and MIS(G ℓ ′ ) ⊆ ∪ S∈MIS(G ℓ ) ((S − {A}) ∪ (S − {B})) ⊆ IS(G ℓ ′ ), where IS(G) represents the set of all independent sets on graph G. 
"In the above theorem, ∪_{X∈X} (∩_{x∈X} Np*(x)) represents the asymptotic behavior of our method, while ∪_{S∈MIS(G)} 2^S represents that of the existing method.", "By a similar argument to the proof, we can show that the elements of the two families are exactly the same, i.e., {∩_{x∈X} Np*(x) | X ∈ X} = {2^S | S ∈ MIS(G)}.", "This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or equivalently the maximal cliques of its complement graph.", "Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which improves the performance of Gibbs sampling over Dirichlet trees.", "This is achieved simply by minimizing the DNF under the asymptotic equivalence relation defined as follows.", "Definition 3 (Asymptotic Equivalence Relation). Given two (∧, ∨)-expressions f1 and f2, we say that f1 is asymptotically equivalent to f2 if and only if f1* = f2*. We denote the relation by ≍, that is, f1 ≍ f2 ⇔ f1* = f2*.", "The next proposition gives an intuitive understanding of why the asymptotic equivalence relation can shrink Dirichlet forests.", "Proposition 4. For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B); (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B).", "Proof. We prove (a) only.", "We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.", "In the experiment, we prepare conjunctions of random links of MLs and CLs with |W| = 10, and compare the average numbers of Dirichlet trees compiled by the minimum DNF (M-DNF) and the asymptotic minimum DNF (AM-DNF) over 100 trials.", "The experimental result shown in Tab. 1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees, especially when the number of links is large.", "Customizing New Links The two primitives Ep and Np allow us to easily customize new links without changing the algorithm.", "Let us consider Imply-Link(A, B), or IL(A, B), a constraint that B must appear if A appears in a topic (informally, A → B); IL(A, B) is effective when B has multiple meanings, as mentioned later in Sec. 4.", "In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) with respect to W = {A, B} is {∅, {A, B}, {B}}.", "Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to have the same meaning as IL(A, B) ∧ IL(B, A).", "However, this anticipation fails under normal equivalency, i.e., ML(A, B) ≠ IL(A, B) ∧ IL(B, A).", "The asymptotic equivalency fulfills the anticipation, as the next proposition shows; this simultaneously suggests that our definition is semantically valid.", "Proposition 5. For any two words A, B ∈ W, IL(A, B) ∧ IL(B, A) ≍ ML(A, B).", "Proof. IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) = ML(A, B), where the asymptotic equivalences follow from Proposition 4.", "Further, we can construct XIL(X1, …, Xn, Y) as an extended version of IL(A, B), which allows us to use multiple conditions like Horn clauses; this informally means (∧_{i=1}^{n} Xi) → Y as an extension of A → B.", "In this case, we set XIL(X1, …, Xn, Y) = (∧_{i=1}^{n} Ep(Xi, Y)) ∨ (∨_{i=1}^{n} Np(Xi)).", "When we want to isolate unnecessary words (e.g., stop words), we can use the Isolate-Link (ISL) defined as ISL(X1, …, Xn) = ∧_{i=1}^{n} Np(Xi).", "This is easier than specifying CLs between high-frequency words and unnecessary words, as described in (Andrzejewski et al., 2009).", "Negation of Links There are two types of interpretation for the negation of links.", "One is strong negation, which regards ¬ML(A, B) as 'A and B must not appear in the same topic', and the other is weak negation, which regards it as 'A and B need not appear in the same topic'.", "We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we simply remove ¬ML(A, B) for weak negation; we consider strong negation in this study.", "Following Def. 1, the ATF of the negation ¬f of a primitive f might naturally be defined as (¬f)* := 2^W − f*. However, this definition does not fit strong negation, since ¬ML(A, B) ̸≍ CL(A, B) under it; thus we define it to fit strong negation as follows.", "Definition 6 (ATF of strong negation of links). Given a link L with arguments X1, …, Xn, letting f_L be the primitives of L, we define the ATF of the negation of L as (¬L(X1, …, Xn))* := (2^W − f_L*(X1, …, Xn)) ∪ 2^(W−{X1,…,Xn}).", "Note that this definition applies not to primitives but to links; in fact, the analogous definition for primitives does not fit strong negation, and so we must remove all negations in a preprocessing stage.", "The next proposition gives the way to remove the negation of each link treated in this study, where ϵ denotes the no-constraint condition resulting for ISL.", "Proposition 7. For any words A, B, X1, …, Xn, Y ∈ W, (a) ¬ML(A, B) ≍ CL(A, B); (b) ¬CL(A, B) ≍ ML(A, B); (c) ¬IL(A, B) ≍ Np(B); (d) ¬XIL(X1, …, Xn, Y) ≍ (∧_{i=1}^{n−1} Ep(Xi, Xn)) ∧ Np(Y); (e) ¬ISL(X1, …, Xn) ≍ ϵ.", "Proof. We prove (a) only. (¬ML(A, B))* = (2^W − Ep*(A, B)) ∪ 2^(W−{A,B}) = ((2^({A,B}) − {∅, {A, B}}) ⊗ 2^(W−{A,B})) ∪ 2^(W−{A,B}) = {∅, {A}, {B}} ⊗ 2^(W−{A,B}) = 2^(W−{A}) ∪ 2^(W−{B}) = Np*(A) ∪ Np*(B) = (CL(A, B))*.", "Comparison on a Synthetic Corpus We experiment using a synthetic corpus {ABAB, ACAC} × 2 with vocabulary W = {A, B, C} to clarify the properties of our method, in the same way as in the existing work (Andrzejewski et al., 2009).", "We set the topic size to T = 2.", "The goal of this experiment is to obtain two topics: a topic where A and B frequently occur and a topic where A and C frequently occur; we abbreviate this grouping type as AB|AC.",
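To make the compile step concrete, here is a hedged sketch (our own, not the authors' implementation) that substitutes links with their Ep/Np primitives, expands to a DNF, and prunes absorbed monomials; each remaining monomial corresponds to one Dirichlet tree, so the count below is the forest size for the running example. The tuple encoding of primitives is an assumption made purely for illustration.

```python
# Hypothetical compile sketch: links -> DNFs over the primitives Ep and Np.
# A DNF is a set of monomials; a monomial is a frozenset of primitive atoms.
def ML(a, b):   return {frozenset({('Ep', a, b)})}
def CL(a, b):   return {frozenset({('Np', a)}), frozenset({('Np', b)})}
def IL(a, b):   return {frozenset({('Ep', a, b)}), frozenset({('Np', a)})}  # Ep(A,B) ∨ Np(A)
def ISL(*ws):   return {frozenset(('Np', w) for w in ws)}                    # ∧ Np(w)

def AND(f, g):  # distribute conjunction over the monomials of both DNFs
    return {m | n for m in f for n in g}

def OR(f, g):
    return f | g

def minimize(dnf):  # drop any monomial absorbed by a strictly smaller one
    return {m for m in dnf if not any(n < m for n in dnf)}

# (ML(A,B) ∨ ML(A,C)) ∧ CL(B,C): expands to four monomials, i.e. four trees.
f = AND(OR(ML('A', 'B'), ML('A', 'C')), CL('B', 'C'))
print(len(minimize(f)))  # -> 4
```

A fuller version would also normalize Ep(a, b) against Ep(b, a) and apply the asymptotic rules of Proposition 4 (the AM-DNF of Tab. 1) to shrink the forest further.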
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.", "Thus, we naively classify a grouping type of each result into the four types.", "Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.", "Fig.", "2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.", "The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.", "Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).", "The result of (1) is the same result as LDA, because of no constraints.", "In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.", "As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.", "The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.", "Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.", "The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).", "The reason is that (5) allows A to appear with C, while (4) does not.", "In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.", "Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.", "Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.", "2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.", "Each topic label is determined by looking carefully at highfrequency words in the topic.", "To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' had other topics such as a topic about comedy film Big 
Lebowski.", "We finally added ML('star', 'jedi') ∨ ML('star', 'trek') to the constraint, which is compiled to four Dirichlet trees, in order to split the two topics while accounting for the polysemy of 'star'.", "After the fourth run of LDA-DF, we appropriately obtained the two topics about Star Wars and Star Trek, as in the fourth block.", "Note that our solution is not ad hoc, and we can easily apply it to similar problems.", "Conclusions We proposed a simple method to achieve topic models with logical constraints on words.", "Our method compiles a given constraint into the prior of LDA-DF, a recently developed semi-supervised extension of LDA with Dirichlet forest priors.", "As well as covering the constraints of the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.", "We proved that our method is asymptotically the same as the existing method for any constraint with a conjunctive expression, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.", "Table 2: Characteristic topics obtained in the experiment on the real corpus. The four blocks correspond to the results of the four constraints ϵ, ISL(…), CL('jedi', 'trek') ∧ ISL(…), and (ML('star', 'jedi') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(…), respectively.", "Topic | High-frequency words in each topic", "[Block 1] ? | have give night film turn performance", "[Block 1] ? | not life have own first only family tell", "[Block 1] ? | movie have n't get good not see", "[Block 1] ? | have black scene tom death die joe", "[Block 1] ? | film have n't not make out well see", "[Block 2] Isolated | have film movie not good make n't", "[Block 2] ? | star war trek planet effect special", "[Block 2] Comedy | comedy funny laugh school hilarious", "[Block 2] Disney | disney voice mulan animated song", "[Block 2] Family | life love family mother woman father", "[Block 3] Isolated | have film movie not make good n't", "[Block 3] StarWars | star war lucas effect jedi special", "[Block 3] ? | science world trek fiction lebowski", "[Block 3] Comedy | funny comedy laugh get hilarious", "[Block 3] Disney | disney truman voice toy show", "[Block 3] Family | family father mother boy child son", "[Block 4] Isolated | have film movie not make good n't", "[Block 4] StarWars | star war toy jedi menace phantom", "[Block 4] StarTrek | alien effect star science special trek", "[Block 4] Comedy | comedy funny laugh hilarious joke", "[Block 4] Disney | disney voice animated mulan", "[Block 4] Family | life love family man story child", "In the comparative study on a synthetic corpus, we clarified the properties of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.", "In the future, we intend to conduct detailed comparative studies on real corpora and to consider a simple method integrating negations as a whole, although we removed them in a preprocessing stage in this study." ] }
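As a side note on the evaluation in the synthetic-corpus section, the grouping-type test (two runs are the same type iff the label-permutation-minimized average Euclidean distance between their topic-word vectors is below 0.1) can be sketched as follows; this is our reading of the procedure, and the toy vectors are invented for illustration.

```python
# Our reading of the grouping-type test in Sec. 4 (toy data, hypothetical names):
# two runs have the same type if the average Euclidean distance between their
# topic-word vectors, minimized over topic-label permutations, is below 0.1.
from itertools import permutations
import math

def same_type(phi1, phi2, thresh=0.1):
    """phi1, phi2: lists of T topic-word probability vectors (same word order)."""
    T = len(phi1)
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    # Factorial in T, which is fine for the T = 2 used in the experiment.
    best = min(sum(dist(phi1[t], phi2[perm[t]]) for t in range(T)) / T
               for perm in permutations(range(T)))
    return best < thresh

# Two T=2 runs over the words (A, B, C) that differ only by topic relabeling:
run1 = [[0.5, 0.5, 0.0], [0.5, 0.0, 0.5]]  # grouping type AB|AC
run2 = [[0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]  # same type, labels swapped
print(same_type(run1, run2))  # -> True
```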
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "LDA with Dirichlet Forest Priors", "Logical Constraints on Words", "(∧,∨)-expressions of Links", "Shrinking Dirichlet Forests", "Customizing New Links", "Negation of Links", "Comparison on a Synthetic Corpus", "Interactive Topic Analysis", "Conclusions" ] }
GEM-SciDuet-train-119#paper-1323#slide-7
LDA DF Andrzejewski ICML2009
Semi-supervised extension of LDA Only conjunction of Must-Links and Cannot-Links Must-Link(A,B) A and B appear in the same topic Cannot-Link(A,B) A and B don't appear in the same topic Extending the generative process Each topic is a constrained distribution on words Taken from a Dirichlet tree distribution, which is a generalization of a Dirichlet distribution Each document is a distribution on topics Taken from a Dirichlet distribution
Semi-supervised extension of LDA Only conjunction of Must-Links and Cannot-Links Must-Link(A,B) A and B appear in the same topic Cannot-Link(A,B) A and B don't appear in the same topic Extending the generative process Each topic is a constrained distribution on words Taken from a Dirichlet tree distribution, which is a generalization of a Dirichlet distribution Each document is a distribution on topics Taken from a Dirichlet distribution
[]
GEM-SciDuet-train-119#paper-1323#slide-8
1323
GEM-SciDuet-train-119#paper-1323#slide-8
Generative process of LDA DF
Always generates a distribution where yogurt and rose do not appear in the same topic. Topic 1 Document 1 yogurt yogurt milk yogurt food milk rose oil fruit food yogurt food milk bacteria fat drink CL fruit cream yogurt milk rose Topic 2 Document 2 rose rose oil yogurt rose valley oil essential milk pure organic kazanlak quality rose food essential oil organic yogurt milk
Always generates a distribution where yogurt and rose do not appear in the same topic. Topic 1 Document 1 yogurt yogurt milk yogurt food milk rose oil fruit food yogurt food milk bacteria fat drink CL fruit cream yogurt milk rose Topic 2 Document 2 rose rose oil yogurt rose valley oil essential milk pure organic kazanlak quality rose food essential oil organic yogurt milk
[]
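To illustrate the generative story summarized in the slide above, here is a minimal sampler (our own sketch, not the authors' code) for the ML(A, B) Dirichlet tree described in the paper: the root draws from Dirichlet(2β, β) over the internal AB node and the leaf C, and the AB mass is then split by Dirichlet(ηβ, ηβ), which ties the probabilities of A and B when η is large. The three-word vocabulary {A, B, C} and the default parameter values are assumptions matching the paper's examples.

```python
# Minimal sketch (ours, not the authors' code) of one draw from the ML(A, B)
# Dirichlet tree: root ~ Dirichlet(2β, β) over {AB-node, C}, then the AB mass
# is split by Dirichlet(ηβ, ηβ); large η ties the probabilities of A and B.
import random

def dirichlet(alphas):
    xs = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(xs)
    return [x / s for x in xs]

def sample_ml_tree(beta=0.01, eta=1000.0):  # β, η as in the paper's Sec. 5
    p_ab, p_c = dirichlet([2 * beta, beta])         # root edge weights 2β and β
    w_a, w_b = dirichlet([eta * beta, eta * beta])  # redistribute the 2β mass
    return {'A': p_ab * w_a, 'B': p_ab * w_b, 'C': p_c}

phi = sample_ml_tree()
print(phi)  # with large η, phi['A'] and phi['B'] come out close to each other
```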
GEM-SciDuet-train-119#paper-1323#slide-9
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177 ], "paper_content_text": [ "Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.", "When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.", "Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.", "Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .", "For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.", "Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.", "We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.", "In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.", "However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?", "Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.", "In this paper, we address such logical expressions of links on LDA-DF framework.", "Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.", "At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.", "This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.", "LDA with Dirichlet Forest Priors We briefly review LDA-DF.", "Let w := w 1 .", ".", ".", "w n be a corpus consisting of D documents, where n is the total number of words in the documents.", "Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.", 
"Let T be the number of topics.", "As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.", "The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.", "The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.", "The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.", "The trees assigned to topics z are denoted as q.", "In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.", "1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.", "This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).", "In the case of CLs, we use the following algorithm.", "For examples, the algorithm creates the two trees in Fig.", "1 (b) for the constraint CL(A, B) ∧ CL(A, C).", "The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.", "Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. 
n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .", "I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .", "C z (s) represents the set of children of node s in tree q z .", "γ (k) z represents a weight of the edge to node k in tree q z .", "Additionally, we define ∑ S s := ∑ s∈S .", "Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s   Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z )   , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .", "|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.", "After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.", "θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).", "We denote it as (∧,∨,¬)-expressions.", "Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.", "Interpretation of negations is discussed in Sec.", "3.4.", "(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.", "The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.", "One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.", "1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.", "1(b) .", "The right tree of Fig.", "1(b) is created by Np(B) ∧ Np(C).", "Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.", "1.", "Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).", "2.", "Calculate the minimum DNF of the primitives.", "3.", "Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.", "Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.", "1.", "We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .", "In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.", "Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms 
of an asymptotic behavior of Dirichlet trees.", "We define asymptotic behavior as Asymptotic Topic Family (ATF) as follows.", "Definition 1 (Asymptotic Topic Family).", "For any (∧, ∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as a family f * calculated by the following rules: Given (∧, ∨)-expressions f 1 and f 2 of primitives and words A, B ∈ W, (i) (f 1 ∨ f 2 ) * := f * 1 ∪ f * 2 (ii) (f 1 ∧ f 2 ) * := f * 1 ∩ f * 2 (iii) Ep * (A, B) := {∅, {A, B}} ⊗ 2 W−{A,B} , (iv) Np * (A) := 2 W−{A} Here, notation ⊗ is defined as X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y } for given two sets X and Y .", "ATF expresses all combinations of words that can occur in a topic when η is large.", "In the above example, the ATF of its expression with respect to W = {A, B, C} is calculated as ((ML(A, B) ∨ ML(A, C)) ∧ CL(B, C)) * = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)) * = ( {∅, {A, B}} ⊗ 2 W−{A,B} ∪{∅, {A, C}} ⊗ 2 W−{A,C} ) ∩ ( 2 W−{B} ∪ 2 W−{C} ) = {∅, {B}, {C}, {A, B}, {A, C}}.", "As we expected, the ATF of the last equation indicates such a constraint that either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.", "Note that the part of {B} satisfies ML(A, C) ∧ CL(B, C).", "If you want to remove {B} and {C}, you can use exclusive disjunctions.", "For the sake of simplicity, we omit descriptions about W when its instance is arbitrary or obvious from now on.", "The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.", "Let MIS(G) be the set of maximal independent sets of graph G. We define (x) ) is equivalent to the union of the power sets of every max- L := {{w, w ′ } | w, w ′ ∈ W, w ̸ = w ′ }.", "imal independent set S ∈ MIS(G) of a graph G := (W, ℓ), that is, ∪ X∈X (∩ x∈X Np * (x) ) = ∪ S∈MIS(G) 2 S .", "Proof.", "For any (∧)-expressions of links characterized by ℓ ⊆ L, we denote f ℓ and G ℓ as the corresponding minimum DNF and graph, respectively.", "We define U ℓ := ∪ S∈MIS(G ℓ ) 2 S .", "When |ℓ| = 1, f * ℓ = U ℓ is trivial.", "Assuming f * ℓ = U ℓ when |ℓ| > 1, for any set ℓ ′ := ℓ ∪ {{A, B}} with an additional link characterized by {A, B} ∈ L, we obtain f * ℓ ′ = ((Np(A) ∨ Np(B)) ∧ f ℓ ) * = (2 W−{A} ∪ 2 W−{B} ) ∩ U ℓ = ∪ S∈MIS(G ℓ ) ( (2 W−{A} ∩ 2 S ) ∪(2 W−{B} ∩ 2 S ) ) = ∪ S∈MIS(G ℓ ) (2 S−{A} ∪ 2 S−{B} ) = ∪ S∈MIS(G ℓ ′ ) 2 S = U ℓ ′ This proves the theorem by induction.", "In the last line of the above deformation, we used ∪ S∈MIS(G) 2 S = ∪ S∈IS(G) 2 S and MIS(G ℓ ′ ) ⊆ ∪ S∈MIS(G ℓ ) ((S − {A}) ∪ (S − {B})) ⊆ IS(G ℓ ′ ), where IS(G) represents the set of all independent sets on graph G. 
In the above theorem, ∪ X∈X (∩ x∈X Np * (x) ) represents asymptotic behaviors of our method, while ∪ S∈MIS(G) 2 S represents those of the existing method.", "By using a similar argument to the proof, we can prove the elements of the two sets are completely the same, i.e., ∩ x∈X Np * (x) = {2 S | S ∈ MIS(G)}.", "This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.", "Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which means the performance improvement of Gibbs sampling for Dirichlet trees.", "This is achieved just by minimizing DNF on asymptotic equivalence relation defined as follows.", "Definition 3 (Asymptotic Equivalence Relation).", "Given two (∧, ∨)-expressions f 1 , f 2 , we say that f 1 is asymptotically equivalent to f 2 , if and only if f * 1 = f * 2 .", "We denote the relation as notation ≍, that is, f 1 ≍ f 2 ⇔ f * 1 = f * 2 .", "The next proposition gives an intuitive understanding of why asymptotic equivalence relation can shrink Dirichlet forests.", "Proposition 4.", "For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B) Proof.", "We prove (a) only.", "We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.", "In the experiment, we prepare conjunctions of random links of MLs and CLs when |W| = 10, and compare the average numbers of Dirichlet trees compiled by minimum DNF (M-DNF) and asymptotic minimum DNF (AM-DNF) in 100 trials.", "The experimental result shown in Tab.", "1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees especially when the number of links is large.", "Customizing New Links Two primitives Ep and Np allow us to easily customize new links without changing the algorithm.", "Let us consider Imply-Link (A, B) or IL(A, B) , which is a constraint that B must appear if A appears in a topic (informally, A → B).", "In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) IL(A, B) is effective when B has multiple meanings as mentioned later in Sec.", "4. 
with respect to W = {A, B} is {∅, {A, B}, {B}}.", "Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to have the same meaning as IL(A, B) ∧ IL(B, A).", "However, this anticipation is wrong under normal equivalency, i.e., ML(A, B) ≠ IL(A, B) ∧ IL(B, A).", "Asymptotic equivalency can fulfill the anticipation, as the next proposition shows.", "This simultaneously suggests that our definition is semantically valid.", "Proposition 5. For any two words A, B ∈ W, IL(A, B) ∧ IL(B, A) ≍ ML(A, B).", "Proof. From Proposition 4, IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) = ML(A, B).", "Further, we can construct XIL(X_1, ..., X_n, Y) as an extended version of IL(A, B), which allows us to use multiple conditions like Horn clauses.", "This informally means ∧_{i=1}^{n} X_i → Y as an extension of A → B.", "In this case, we set XIL(X_1, ..., X_n, Y) = ∧_{i=1}^{n} Ep(X_i, Y) ∨ ∨_{i=1}^{n} Np(X_i).", "When we want to isolate unnecessary words (i.e., stop words), we can use the Isolate-Link (ISL), defined as ISL(X_1, ..., X_n) = ∧_{i=1}^{n} Np(X_i).", "This is easier than considering CLs between high-frequency words and unnecessary words as described in the existing work.", "Negation of Links There are two types of interpretation for the negation of links.", "One is strong negation, which regards ¬ML(A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".", "We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we simply remove ¬ML(A, B) for weak negation.", "We consider strong negation in this study.", "According to Def. 1, the ATF of the negation ¬f of a primitive f might seem to be defined as (¬f)* := 2^W − f*.", "However, this definition does not fit strong negation, since ¬ML(A, B) is not asymptotically equivalent to CL(A, B) under it.", "Thus we define it to fit strong negation as follows.", "Definition 6 (ATF of strong negation of links). Given a link L with arguments X_1, ..., X_n, and letting f_L be the primitives of L, we define the ATF of the negation of L as (¬L(X_1, ..., X_n))* := (2^W − f_L*(X_1, ..., X_n)) ∪ 2^{W−{X_1, ..., X_n}}.", "Note that this definition is used not for primitives but for links.", "In fact, the analogous definition for primitives does not fit strong negation, and so we must remove all negations in a preprocessing stage.", "The next proposition gives the way to remove the negation of each link treated in this study.", "We define the no-constraint condition ϵ as the result for ISL.", "Proposition 7. For any words A, B, X_1, ..., X_n, Y ∈ W: (a) ¬ML(A, B) ≍ CL(A, B); (b) ¬CL(A, B) ≍ ML(A, B); (c) ¬IL(A, B) ≍ Np(B); (d) ¬XIL(X_1, ..., X_n, Y) ≍ ∧_{i=1}^{n−1} Ep(X_i, X_n) ∧ Np(Y); (e) ¬ISL(X_1, ..., X_n) ≍ ϵ.", "Proof. We prove (a) only.", "(¬ML(A, B))* = (2^W − Ep*(A, B)) ∪ 2^{W−{A,B}} = ((2^{{A,B}} − {∅, {A, B}}) ⊗ 2^{W−{A,B}}) ∪ 2^{W−{A,B}} = {∅, {A}, {B}} ⊗ 2^{W−{A,B}} = 2^{W−{A}} ∪ 2^{W−{B}} = Np*(A) ∪ Np*(B) = (CL(A, B))*.", "Comparison on a Synthetic Corpus We experiment using a synthetic corpus {ABAB, ACAC} × 2 with vocabulary W = {A, B, C} to clarify the properties of our method, in the same way as in the existing work.", "We set the topic size as T = 2.", "The goal of this experiment is to obtain two topics: a topic where A and B frequently occur and a topic where A and C frequently occur.", "We abbreviate this grouping type as AB|AC.",
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.", "Thus, we naively classify the grouping type of each result into these four types.", "Concretely speaking, for any two topic-word probabilities φ̂ and φ̂′, we calculate the average of the Euclidean distances between each vector component of φ̂ and the corresponding one of φ̂′, ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.", "Fig. 2 shows the occurrence rates of the grouping types over 1,000 results after 1,000 iterations by LDA-DF with six constraints: (1) no constraint, (2) ML(A, B), (3) CL(B, C), (4) ML(A, B) ∧ CL(B, C), (5) IL(B, A), and (6) ML(A, B) ∨ ML(A, C); a higher rate of AB|AC is better.", "The results of (1-4) can be achieved even by the existing method, while those of (5-6) can be achieved only by our method.", "Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing method can only obtain about 60%, which is the best of the results of (1-4).", "The result of (1) is the same as that of LDA, because there are no constraints.", "In that result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.", "As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.", "The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of the knowledge expressions available in the existing method.", "Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by the transitive law and is inconsistent with all four types.", "The result (80%) of (5) IL(B, A) is interestingly better than that (60%) of (4), despite (5) having fewer primitives than (4).", "The reason is that (5) allows A to appear with C, while (4) does not.", "In the result of (6) ML(A, B) ∨ ML(A, C), the constraint achieves almost 100%, which is the best of the knowledge expressions in our method.", "Of course, the constraint (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.", "Interactive Topic Analysis We demonstrate the advantages of our method via interactive topic analysis on a real corpus, which consists of 1,000 stemmed, down-cased (positive) movie reviews used in (Pang and Lee, 2004).", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics contain stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block of Tab. 2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we could identify most topics, such as Comedy, Disney, and Family, since the cumbersome words were isolated, and we then noticed that two topics about Star Wars and Star Trek were merged, as in the second block.", "Each topic label is determined by looking carefully at the high-frequency words in the topic.", "To split the two merged topics, we added CL('jedi', 'trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there was no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' was mixed with other topics, such as a topic about the comedy film Big Lebowski.", "We finally added ML('star', 'jedi') ∨ ML('star', 'trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics while accounting for the polysemy of 'star'.", "After the fourth run of LDA-DF, we appropriately obtained two topics about Star Wars and Star Trek, as in the fourth block.", "Note that our solution is not ad hoc, and we can easily apply it to similar problems.", "Conclusions We proposed a simple method to achieve topic models with logical constraints on words.", "Our method compiles a given constraint to the prior of LDA-DF, which is a recently developed semi-supervised extension of LDA with Dirichlet forest priors.", "As well as covering the constraints of the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.", "We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.", "Table 2: Characteristic topics obtained in the experiment on the real corpus.", "The four blocks in the table correspond to the results of the four constraints ϵ; ISL(···); CL('jedi', 'trek') ∧ ISL(···); and (ML('star', 'jedi') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(···), respectively.", "Block 1 (ϵ): [?] have give night film turn performance; [?] not life have own first only family tell; [?] movie have n't get good not see; [?] have black scene tom death die joe; [?] film have n't not make out well see.", "Block 2 (ISL): [Isolated] have film movie not good make n't; [?] star war trek planet effect special; [Comedy] comedy funny laugh school hilarious; [Disney] disney voice mulan animated song; [Family] life love family mother woman father.", "Block 3 (CL ∧ ISL): [Isolated] have film movie not make good n't; [StarWars] star war lucas effect jedi special; [?] science world trek fiction lebowski; [Comedy] funny comedy laugh get hilarious; [Disney] disney truman voice toy show; [Family] family father mother boy child son.", "Block 4 ((ML ∨ ML) ∧ CL ∧ ISL): [Isolated] have film movie not make good n't; [StarWars] star war toy jedi menace phantom; [StarTrek] alien effect star science special trek; [Comedy] comedy funny laugh hilarious joke; [Disney] disney voice animated mulan; [Family] life love family man story child.", "In the comparative study on a synthetic corpus, we clarified the properties of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.", "In the future, we intend to conduct detailed comparative studies on real corpora and to consider a simple method that handles negations in an integrated way, although we removed them in a preprocessing stage in this study." ] }
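A minimal sketch of the link-compilation scheme described in the paper content above: rewrite ML, CL, IL, and ISL in terms of the two primitives Ep and Np, expand to a DNF, and build one Dirichlet tree per monomial. The encoding below is ours, not the paper's code, and `minimize` only performs absorption rather than computing a true minimum DNF or the asymptotic minimization of Section 3.2.

```python
from itertools import product

# A primitive is a tuple: ("Ep", A, B) equalizes p(A) and p(B); ("Np", A) zeros p(A).
# A constraint in DNF form is a frozenset of monomials, each a frozenset of primitives.
def Ep(a, b): return frozenset({frozenset({("Ep",) + tuple(sorted((a, b)))})})
def Np(a):    return frozenset({frozenset({("Np", a)})})

def OR(*fs):
    return frozenset().union(*fs)

def AND(*fs):
    # Distribute conjunction over disjunction: pick one monomial per operand and merge.
    return frozenset(frozenset().union(*ms) for ms in product(*fs))

# The links of Section 3, written with the two primitives.
def ML(a, b): return Ep(a, b)                    # ML(A,B) = Ep(A,B)
def CL(a, b): return OR(Np(a), Np(b))            # CL(A,B) = Np(A) or Np(B)
def IL(a, b): return OR(Ep(a, b), Np(a))         # Imply-Link: A -> B
def ISL(*ws): return AND(*(Np(w) for w in ws))   # isolate stop words

def minimize(f):
    # Absorption only: drop any monomial that strictly contains another one.
    return frozenset(m for m in f if not any(m2 < m for m2 in f))

# Section 3.1 example: (ML(A,B) or ML(A,C)) and CL(B,C) compiles to four
# monomials, i.e. a Dirichlet forest of four trees.
expr = AND(OR(ML("A", "B"), ML("A", "C")), CL("B", "C"))
print(len(minimize(expr)))  # -> 4
```

The same helpers reproduce the count from the interactive analysis above: AND(OR(ML('star', 'jedi'), ML('star', 'trek')), CL('jedi', 'trek'), ISL('film', 'movie', 'have', 'not', "n't")) also minimizes to four monomials, matching the four Dirichlet trees reported there.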
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "LDA with Dirichlet Forest Priors", "Logical Constraints on Words", "(∧,∨)-expressions of Links", "Shrinking Dirichlet Forests", "Customizing New Links", "Negation of Links", "Comparison on a Synthetic Corpus", "Interactive Topic Analysis", "Conclusions" ] }
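The grouping-type classification from the synthetic-corpus experiment above (average Euclidean distance between topic-word vectors, ignoring topic labels, same type if below 0.1) might be implemented as follows; treating φ̂ as a (topics × vocabulary) NumPy array and brute-forcing the label permutation are our assumptions, cheap here since T = 2.

```python
import numpy as np
from itertools import permutations

def same_grouping_type(phi1, phi2, thresh=0.1):
    """True if two topic-word matrices are the same grouping type: the mean
    Euclidean distance between topic rows, under the best relabeling of
    topics, falls below the threshold."""
    T = phi1.shape[0]
    best = min(
        np.mean([np.linalg.norm(phi1[t] - phi2[perm[t]]) for t in range(T)])
        for perm in permutations(range(T))
    )
    return best < thresh
```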
GEM-SciDuet-train-119#paper-1323#slide-9
Algorithm to generate distributions in LDA-DF
1. Map links to a graph 2. Contract the words on each Must-Link 3. Extract the maximal independent sets (MIS) 4. Generate a distribution based on each MIS Any conjunction of links can be mapped to a graph Regard two words on each Must-Link as one word MIS = Maximal set of nodes without edges Equalize the frequencies of contracted words Zero the frequencies of words not in the MIS [Slide figure: an example graph over nodes A-G with ML and CL edges and its contraction]
1. Map links to a graph 2. Contract the words on each Must-Link 3. Extract the maximal independent sets (MIS) 4. Generate a distribution based on each MIS Any conjunction of links can be mapped to a graph Regard two words on each Must-Link as one word MIS = Maximal set of nodes without edges Equalize the frequencies of contracted words Zero the frequencies of words not in the MIS [Slide figure: an example graph over nodes A-G with ML and CL edges and its contraction]
[]
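The four-step recipe on this slide maps directly onto standard graph routines. Below is a sketch using networkx (our library choice; the returned spec format is invented for illustration, and the actual sampling in step 4 is left to the Dirichlet trees):

```python
import networkx as nx

def dirichlet_tree_specs(words, must_links, cannot_links):
    # Step 1: map links to a graph (nodes = words, edges = Cannot-Links).
    g = nx.Graph()
    g.add_nodes_from(words)
    g.add_edges_from(cannot_links)

    # Step 2: contract each Must-Link, so its two words act as one node.
    group = {w: {w} for w in words}
    rep = {w: w for w in words}  # current representative node of each word
    for a, b in must_links:
        ra, rb = rep[a], rep[b]
        if ra == rb:
            continue
        g = nx.contracted_nodes(g, ra, rb, self_loops=False)
        group[ra] |= group.pop(rb)
        rep.update({w: ra for w, r in rep.items() if r == rb})

    # Step 3: maximal independent sets = maximal cliques of the complement graph.
    for mis in nx.find_cliques(nx.complement(g)):
        kept = set().union(*(group[n] for n in mis))
        # Step 4: one distribution spec per MIS; equalize each contracted
        # group of words, zero every word outside the MIS.
        yield {"equalize": [sorted(group[n]) for n in mis if len(group[n]) > 1],
               "zero": sorted(set(words) - kept)}
```

For CL(A, B) ∧ CL(A, C), the constraint behind Fig. 1(b) in the paper, this yields the expected two specs: one zeroing A, the other zeroing B and C.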
GEM-SciDuet-train-119#paper-1323#slide-10
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models, based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of the pairwise constraints, Must-Links and Cannot-Links, used in the constrained clustering literature. Our method not only covers the original constraints of the existing work, but also allows us to easily add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
GEM-SciDuet-train-119#paper-1323#slide-10
Negations
Delete negations (¬) in a preprocessing stage Weak negation: ¬Must-Link(A,B) = no constraint (A and B need not appear in the same topic) Strong negation: ¬Must-Link(A,B) ≍ Cannot-Link(A,B) Focus only on conjunctions and disjunctions
Delete negations (¬) in a preprocessing stage Weak negation: ¬Must-Link(A,B) = no constraint (A and B need not appear in the same topic) Strong negation: ¬Must-Link(A,B) ≍ Cannot-Link(A,B) Focus only on conjunctions and disjunctions
[]
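The strong-negation rewrites of Proposition 7, quoted in the paper content above, reduce to a small lookup; the tuple encoding of links below is ours, not the paper's:

```python
def drop_negation(link):
    """Rewrite a negated link into a negation-free list of conjuncts,
    following Proposition 7 (strong negation)."""
    kind, *args = link                # a link is a tuple like ("ML", "A", "B")
    if kind == "ML":                  # (a) not ML(A,B)  ~  CL(A,B)
        return [("CL",) + tuple(args)]
    if kind == "CL":                  # (b) not CL(A,B)  ~  ML(A,B)
        return [("ML",) + tuple(args)]
    if kind == "IL":                  # (c) not IL(A,B)  ~  Np(B)
        return [("Np", args[1])]
    if kind == "XIL":                 # (d) not XIL(X1..Xn,Y) ~ Ep(Xi,Xn) for i<n, plus Np(Y)
        *xs, y = args
        return [("Ep", xi, xs[-1]) for xi in xs[:-1]] + [("Np", y)]
    if kind == "ISL":                 # (e) not ISL(...)  ~  no constraint
        return []
    raise ValueError(f"unknown link: {kind}")
```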
GEM-SciDuet-train-119#paper-1323#slide-11
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models, based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of the pairwise constraints, Must-Links and Cannot-Links, used in the constrained clustering literature. Our method not only covers the original constraints of the existing work, but also allows us to easily add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
GEM-SciDuet-train-119#paper-1323#slide-11
Key observation for logical expressions
Any constrained distribution is represented by a conjunctive expression of two primitives ZeroPrim(A): makes p(A) → 0 EqualPrim(A,B): gives A and B the same (equal) frequency [Slide figure: an example distribution over words A-G with EqualPrim(B, E) and EqualPrim(C, D)]
Any constrained distribution is represented by a conjunctive expression of two primitives ZeroPrim(A): makes p(A) → 0 EqualPrim(A,B): gives A and B the same (equal) frequency [Slide figure: an example distribution over words A-G with EqualPrim(B, E) and EqualPrim(C, D)]
[]
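Combining the two primitives on this slide with the per-MIS recipe from the algorithm slide earlier, a toy sampler for one constrained topic-word distribution could look like the following. This is a stand-in for the real Dirichlet tree sampler; the spec format and the defaults for β and η are our illustration choices:

```python
import numpy as np

def sample_constrained_phi(vocab, spec, beta=0.01, eta=1000.0, seed=0):
    """Toy sampler: ZeroPrim removes a word's mass entirely; EqualPrim
    re-splits a group's mass with Dirichlet(eta*beta, ...), which
    concentrates near an even split when eta is large."""
    rng = np.random.default_rng(seed)
    kept = [w for w in vocab if w not in set(spec["zero"])]
    p = dict(zip(kept, rng.dirichlet([beta] * len(kept))))
    for grp in spec["equalize"]:      # each group of Must-Linked words
        share = sum(p[w] for w in grp)
        split = rng.dirichlet([eta * beta] * len(grp))
        for w, s in zip(grp, split):
            p[w] = share * s
    return {w: p.get(w, 0.0) for w in vocab}
```

For instance, sample_constrained_phi(list("ABC"), {"equalize": [], "zero": ["A"]}) realizes the left tree of the paper's CL(A, B) ∧ CL(A, C) example, with p(A) = 0.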
GEM-SciDuet-train-119#paper-1323#slide-12
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models, based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of the pairwise constraints, Must-Links and Cannot-Links, used in the constrained clustering literature. Our method not only covers the original constraints of the existing work, but also allows us to easily add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177 ], "paper_content_text": [ "Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.", "When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.", "Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.", "Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .", "For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.", "Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.", "We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.", "In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.", "However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?", "Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.", "In this paper, we address such logical expressions of links on LDA-DF framework.", "Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.", "At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.", "This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.", "LDA with Dirichlet Forest Priors We briefly review LDA-DF.", "Let w := w 1 .", ".", ".", "w n be a corpus consisting of D documents, where n is the total number of words in the documents.", "Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.", 
"Let T be the number of topics.", "As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.", "The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.", "The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.", "The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.", "The trees assigned to topics z are denoted as q.", "In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.", "1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.", "This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).", "In the case of CLs, we use the following algorithm.", "For examples, the algorithm creates the two trees in Fig.", "1 (b) for the constraint CL(A, B) ∧ CL(A, C).", "The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.", "Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. 
"Sampling of the tree q_z is achieved by sequentially sampling the subtree q_z^{(r)} corresponding to the r-th connected component, using the following equation: p(q_z^{(r)} = q′ | z, q_{-z}, q_z^{(-r)}, w) ∝ |M_{r,q′}| × ∏_{s ∈ I_{z,r}^{(q′)}} [Γ(∑_{k ∈ C_z(s)} γ_z^{(k)}) ∏_{k ∈ C_z(s)} Γ(γ_z^{(k)} + n_z^{(k)})] / [Γ(∑_{k ∈ C_z(s)} (γ_z^{(k)} + n_z^{(k)})) ∏_{k ∈ C_z(s)} Γ(γ_z^{(k)})], where I_{z,r}^{(q′)} denotes the set of internal nodes in the subtree q′ corresponding to the r-th connected component of tree q_z, and |M_{r,q′}| denotes the size of the maximal independent set corresponding to the subtree q′ for the r-th connected component.", "After sufficiently sampling z_i and q_z, we can infer the posterior probabilities φ̂ and θ̂ from the last sampled z and q, in a similar manner to standard LDA: θ̂_z^{(d)} = (n_z^{(d)} + α) / ∑_{z′=1}^{T} (n_{z′}^{(d)} + α) and φ̂_z^{(w)} = ∏_{s ∈ I_z(↑w)} [γ_z^{(C_z(s↓w))} + n_z^{(C_z(s↓w))}] / [∑_{k ∈ C_z(s)} (γ_z^{(k)} + n_z^{(k)})].", "Logical Constraints on Words In this section, we address logical expressions of the two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).", "We denote these as (∧,∨,¬)-expressions.", "Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.", "The interpretation of negations is discussed in Sec. 3.4.", "(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, whereas the existing method can only treat conjunctions of links.", "The key observation is that any Dirichlet trees constructed from MLs and CLs are essentially based on only two primitives.", "One is Ep(A, B), which equalizes the occurrence probabilities of A and B in a topic, as in Fig. 1(a); the other is Np(A), which zeros the occurrence probability of A in a topic, as in the left tree of Fig. 1(b).", "The right tree of Fig. 1(b) is created by Np(B) ∧ Np(C).", "Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) and CL(A, B) = Np(A) ∨ Np(B).", "Using this substitution, we can compile a (∧,∨)-expression of links to the corresponding Dirichlet trees with the following algorithm: 1. Substitute all links (ML and CL) with the corresponding primitives (Ep and Np); 2. Calculate the minimum DNF of the primitives; 3. Construct the Dirichlet trees corresponding to the (monotone) monomials of the DNF.", "Let us consider the three words A = 'kung-fu', B = 'jackie', and C = 'bruce' from Sec. 1.", "We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C).", "In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)), and constructs the four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.",
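A hedged sketch (not from the paper) of steps 1-2 of this compilation: links become primitives, and the DNF is represented as a set of frozensets and minimized by dropping monomials that are supersets of others. The tuple encoding of primitives and all helper names are my own.

```python
from itertools import product

# primitives: ('Ep', a, b) equalizes a and b; ('Np', a) zeros a.
def ML(a, b): return ('Ep', a, b)
def CL(a, b): return ('or', ('Np', a), ('Np', b))
def AND(*fs): return ('and',) + fs
def OR(*fs):  return ('or',) + fs

def dnf(f):
    """Return the DNF of f as a set of monomials (frozensets of primitives)."""
    if f[0] == 'or':
        out = set()
        for g in f[1:]:
            out |= dnf(g)
        return out
    if f[0] == 'and':
        parts = [dnf(g) for g in f[1:]]
        return {frozenset().union(*combo) for combo in product(*parts)}
    return {frozenset([f])}          # a bare primitive

def minimize(monomials):
    """Drop non-minimal monomials (proper supersets of another monomial)."""
    return {m for m in monomials
            if not any(o < m for o in monomials)}

# The running example: (ML(A,B) ∨ ML(A,C)) ∧ CL(B,C)
A, B, C = 'kung-fu', 'jackie', 'bruce'
expr = AND(OR(ML(A, B), ML(A, C)), CL(B, C))
for m in sorted(minimize(dnf(expr)), key=sorted):
    print(sorted(m))   # four monomials -> four Dirichlet trees
```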
"Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms of the asymptotic behavior of the Dirichlet trees.", "We formalize this asymptotic behavior as the Asymptotic Topic Family (ATF), defined as follows.", "Definition 1 (Asymptotic Topic Family). For any (∧,∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as the family f* calculated by the following rules: given (∧,∨)-expressions f_1 and f_2 of primitives and words A, B ∈ W, (i) (f_1 ∨ f_2)* := f_1* ∪ f_2*; (ii) (f_1 ∧ f_2)* := f_1* ∩ f_2*; (iii) Ep*(A, B) := {∅, {A, B}} ⊗ 2^{W−{A,B}}; (iv) Np*(A) := 2^{W−{A}}.", "Here, the notation ⊗ is defined as X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y} for two given sets X and Y.", "The ATF expresses all combinations of words that can occur in a topic when η is large.", "In the above example, the ATF of the expression with respect to W = {A, B, C} is calculated as ((ML(A, B) ∨ ML(A, C)) ∧ CL(B, C))* = ((Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)))* = ({∅, {A, B}} ⊗ 2^{W−{A,B}} ∪ {∅, {A, C}} ⊗ 2^{W−{A,C}}) ∩ (2^{W−{B}} ∪ 2^{W−{C}}) = {∅, {B}, {C}, {A, B}, {A, C}}.", "As we expected, the ATF of the last equation indicates the intended constraint: either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.", "Note that the element {B} satisfies ML(A, C) ∧ CL(B, C).", "If we want to remove {B} and {C}, we can use exclusive disjunctions.", "For the sake of simplicity, we omit mention of W from now on when its instance is arbitrary or obvious.", "The next theorem guarantees the asymptotic equivalency between our method and the existing method.", "Let MIS(G) be the set of maximal independent sets of a graph G, and let L := {{w, w′} | w, w′ ∈ W, w ≠ w′}.", "Theorem 2. For any (∧)-expression of CLs characterized by a set ℓ ⊆ L, let X be the family of the word sets of the Np primitives in each monomial of its minimum DNF; then the ATF ∪_{X ∈ X} (∩_{x ∈ X} Np*(x)) is equivalent to the union of the power sets of every maximal independent set S ∈ MIS(G) of the graph G := (W, ℓ), that is, ∪_{X ∈ X} (∩_{x ∈ X} Np*(x)) = ∪_{S ∈ MIS(G)} 2^S.", "Proof. For any (∧)-expression of links characterized by ℓ ⊆ L, we denote by f_ℓ and G_ℓ the corresponding minimum DNF and graph, respectively.", "We define U_ℓ := ∪_{S ∈ MIS(G_ℓ)} 2^S.", "When |ℓ| = 1, f_ℓ* = U_ℓ is trivial.", "Assuming f_ℓ* = U_ℓ when |ℓ| ≥ 1, for any set ℓ′ := ℓ ∪ {{A, B}} with an additional link characterized by {A, B} ∈ L, we obtain f_{ℓ′}* = ((Np(A) ∨ Np(B)) ∧ f_ℓ)* = (2^{W−{A}} ∪ 2^{W−{B}}) ∩ U_ℓ = ∪_{S ∈ MIS(G_ℓ)} ((2^{W−{A}} ∩ 2^S) ∪ (2^{W−{B}} ∩ 2^S)) = ∪_{S ∈ MIS(G_ℓ)} (2^{S−{A}} ∪ 2^{S−{B}}) = ∪_{S ∈ MIS(G_{ℓ′})} 2^S = U_{ℓ′}.", "This proves the theorem by induction.", "In the last step of the above derivation, we used ∪_{S ∈ MIS(G)} 2^S = ∪_{S ∈ IS(G)} 2^S and MIS(G_{ℓ′}) ⊆ ∪_{S ∈ MIS(G_ℓ)} {S−{A}, S−{B}} ⊆ IS(G_{ℓ′}), where IS(G) is the set of all independent sets of graph G.",
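Definition 1 is small enough to check by brute force on a toy vocabulary. The sketch below is my own code, not the authors'; it computes the ATF of the running example and reproduces {∅, {B}, {C}, {A, B}, {A, C}}.

```python
from itertools import chain, combinations

W = {'A', 'B', 'C'}

def powerset(s):
    s = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

def atf(f):
    """Asymptotic topic family of a (∧,∨)-expression, per Definition 1."""
    if f[0] == 'or':
        return set.union(*[atf(g) for g in f[1:]])
    if f[0] == 'and':
        return set.intersection(*[atf(g) for g in f[1:]])
    if f[0] == 'Ep':
        _, a, b = f
        rest = powerset(W - {a, b})
        return {frozenset({a, b}) | r for r in rest} | rest
    if f[0] == 'Np':
        return powerset(W - {f[1]})

expr = ('and',
        ('or', ('Ep', 'A', 'B'), ('Ep', 'A', 'C')),
        ('or', ('Np', 'B'), ('Np', 'C')))
print(sorted(map(sorted, atf(expr))))
# -> [[], ['A', 'B'], ['A', 'C'], ['B'], ['C']]
```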
"In the above theorem, ∪_{X ∈ X} (∩_{x ∈ X} Np*(x)) represents the asymptotic behavior of our method, while ∪_{S ∈ MIS(G)} 2^S represents that of the existing method.", "By a similar argument to the proof, we can show that the elements of the two families are exactly the same, i.e., {∩_{x ∈ X} Np*(x) | X ∈ X} = {2^S | S ∈ MIS(G)}.", "This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.", "Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which improves the performance of Gibbs sampling over the Dirichlet trees.", "This is achieved simply by minimizing the DNF under the asymptotic equivalence relation defined as follows.", "Definition 3 (Asymptotic Equivalence Relation). Given two (∧,∨)-expressions f_1 and f_2, we say that f_1 is asymptotically equivalent to f_2 if and only if f_1* = f_2*.", "We denote this relation by ≍, that is, f_1 ≍ f_2 ⇔ f_1* = f_2*.", "The next proposition gives an intuitive understanding of why the asymptotic equivalence relation can shrink Dirichlet forests.", "Proposition 4. For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B); (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B).", "Proof. We prove (a) only: Ep*(A, B) ∪ (Np*(A) ∩ Np*(B)) = ({∅, {A, B}} ⊗ 2^{W−{A,B}}) ∪ 2^{W−{A,B}} = {∅, {A, B}} ⊗ 2^{W−{A,B}} = Ep*(A, B).", "We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.", "In the experiment, we prepare conjunctions of random ML and CL links for |W| = 10, and compare the average numbers of Dirichlet trees compiled by the minimum DNF (M-DNF) and the asymptotic minimum DNF (AM-DNF) over 100 trials.", "The experimental result shown in Tab. 1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees, especially when the number of links is large.",
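A sketch of the AM-DNF idea, built on the hypothetical dnf/minimize helpers above and my own tuple encoding: rule (b) rewrites Ep(a, b) ∧ Np(a) to Np(a) ∧ Np(b) inside a monomial, and a generalization of rule (a) drops any monomial that asymptotically implies another monomial.

```python
def am_dnf(monomials):
    """Asymptotic minimization sketch based on Proposition 4.
    Monomials are frozensets of ('Ep', a, b) / ('Np', a) primitives."""
    rewritten = set()
    for m in monomials:
        m = set(m)
        changed = True
        while changed:                 # rule (b): Ep(a,b) ∧ Np(a) ≍ Np(a) ∧ Np(b)
            changed = False
            for p in [p for p in m if p[0] == 'Ep']:
                _, a, b = p
                if ('Np', a) in m or ('Np', b) in m:
                    m.discard(p)
                    m.update({('Np', a), ('Np', b)})
                    changed = True
                    break
        rewritten.add(frozenset(m))

    def implies(m, p):                 # does monomial m imply primitive p?
        return (p in m or
                (p[0] == 'Ep' and ('Np', p[1]) in m and ('Np', p[2]) in m))

    # rule (a) generalized: drop m when some other monomial o is implied by m
    return {m for m in rewritten
            if not any(o != m and all(implies(m, p) for p in o)
                       for o in rewritten)}

# e.g. am_dnf({frozenset({('Ep','A','B')}),
#              frozenset({('Np','A'), ('Np','B')})})
# keeps only {('Ep','A','B')}, matching Proposition 4(a).
```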
"Customizing New Links The two primitives Ep and Np allow us to easily customize new links without changing the algorithm.", "Let us consider Imply-Link(A, B), or IL(A, B), which is a constraint that B must appear if A appears in a topic (informally, A → B).", "In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) with respect to W = {A, B} is {∅, {A, B}, {B}}.", "(IL(A, B) is effective when B has multiple meanings, as mentioned later in Sec. 4.)", "Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) would seem to have the same meaning as IL(A, B) ∧ IL(B, A).", "However, this anticipation is wrong under normal equivalency, i.e., ML(A, B) ≠ IL(A, B) ∧ IL(B, A).", "Asymptotic equivalency can fulfill the anticipation, as the next proposition shows; this simultaneously suggests that our definition is semantically valid.", "Proposition 5. For any two words A, B ∈ W, IL(A, B) ∧ IL(B, A) ≍ ML(A, B).", "Proof. IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) = ML(A, B), where the two asymptotic steps follow from Proposition 4.", "Further, we can construct XIL(X_1, · · · , X_n, Y) as an extended version of IL(A, B), which allows us to use multiple conditions like Horn clauses.", "This informally means ∧_{i=1}^{n} X_i → Y as an extension of A → B.", "In this case, we set XIL(X_1, · · · , X_n, Y) = ∧_{i=1}^{n} Ep(X_i, Y) ∨ ∨_{i=1}^{n} Np(X_i).", "When we want to isolate unnecessary words (i.e., stop words), we can use the Isolate-Link (ISL), defined as ISL(X_1, · · · , X_n) = ∧_{i=1}^{n} Np(X_i).", "This is easier than considering CLs between high-frequency words and unnecessary words, as described in the original work.", "Negation of Links There are two types of interpretation for the negation of links.", "One is strong negation, which regards ¬ML(A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".", "We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we simply remove ¬ML(A, B) for weak negation.", "We consider strong negation in this study.", "Following Def. 1, the ATF of the negation ¬f of a primitive f would seem to be defined as (¬f)* := 2^W − f*.", "However, this definition does not fit strong negation, since ¬ML(A, B) ̸≍ CL(A, B) under it.", "Thus, we define it to fit strong negation as follows.", "Definition 6 (ATF of strong negation of links). Given a link L with arguments X_1, · · · , X_n, letting f_L be the primitives of L, we define the ATF of the negation of L as (¬L(X_1, · · · , X_n))* := (2^W − f_L*(X_1, · · · , X_n)) ∪ 2^{W−{X_1, ··· , X_n}}.", "Note that this definition is used not for primitives but for links.", "Indeed, the analogous definition for primitives does not fit strong negation, and so we must remove all negations in a preprocessing stage.", "The next proposition gives the way to remove the negation of each link treated in this study.", "We define the no-constraint condition ϵ as the result for ISL.", "Proposition 7. For any words A, B, X_1, · · · , X_n, Y ∈ W, (a) ¬ML(A, B) ≍ CL(A, B); (b) ¬CL(A, B) ≍ ML(A, B); (c) ¬IL(A, B) ≍ Np(B); (d) ¬XIL(X_1, · · · , X_n, Y) ≍ ∧_{i=1}^{n−1} Ep(X_i, X_n) ∧ Np(Y); (e) ¬ISL(X_1, · · · , X_n) ≍ ϵ.", "Proof. We prove (a) only: (¬ML(A, B))* = (2^W − Ep*(A, B)) ∪ 2^{W−{A,B}} = ((2^{{A,B}} − {∅, {A, B}}) ⊗ 2^{W−{A,B}}) ∪ 2^{W−{A,B}} = {∅, {A}, {B}} ⊗ 2^{W−{A,B}} = 2^{W−{A}} ∪ 2^{W−{B}} = Np*(A) ∪ Np*(B) = (CL(A, B))*.",
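The negation-removal preprocessing of Proposition 7 is mechanical; below is a minimal sketch with an assumed encoding of my own (tuples such as ('not', ('ML', a, b))), not the authors' code.

```python
def remove_negation(link):
    """Preprocessing sketch for strong negation, following Proposition 7.
    Links are tuples such as ('not', ('ML', a, b))."""
    assert link[0] == 'not'
    kind = link[1][0]
    args = link[1][1:]
    if kind == 'ML':                    # (a) ¬ML(A,B) ≍ CL(A,B)
        return ('CL',) + args
    if kind == 'CL':                    # (b) ¬CL(A,B) ≍ ML(A,B)
        return ('ML',) + args
    if kind == 'IL':                    # (c) ¬IL(A,B) ≍ Np(B)
        return ('Np', args[1])
    if kind == 'XIL':                   # (d) ¬XIL(X1..Xn,Y) ≍ ∧ Ep(Xi,Xn) ∧ Np(Y)
        xs, y = args[:-1], args[-1]
        eps = [('Ep', x, xs[-1]) for x in xs[:-1]]
        return ('and', *eps, ('Np', y))
    if kind == 'ISL':                   # (e) ¬ISL(X1..Xn) ≍ ε (no constraint)
        return ('true',)
    raise ValueError(kind)
```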
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.", "Thus, we naively classify a grouping type of each result into the four types.", "Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.", "Fig.", "2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.", "The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.", "Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).", "The result of (1) is the same result as LDA, because of no constraints.", "In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.", "As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.", "The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.", "Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.", "The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).", "The reason is that (5) allows A to appear with C, while (4) does not.", "In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.", "Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.", "Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.", "2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.", "Each topic label is determined by looking carefully at highfrequency words in the topic.", "To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' had other topics such as a topic about comedy film Big 
Lebowski.", "We finally added ML('star', 'jedi') ∨ ML ('star', ' trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics considering polysemy of 'star'.", "After the fourth run of LDA-DF, we appropriately obtained two topics about Star Wars and Star Trek as in the fourth block.", "Note that our solution is not ad-hoc, and we can easily apply it to similar problems.", "Conclusions We proposed a simple method to achieve topic models with logical constraints on words.", "Our method compiles a given constraint to the prior of LDA-DF, which is a recently developed semisupervised extension of LDA with Dirichlet forest priors.", "As well as covering the constraints in the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.", "We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.", "In the comparative Table 2 : Characteristic topics obtained in the experiment on the real corpus.", "Four blocks in the table corresponds to the results of the four constraints ϵ, ISL(· · · ), CL('jedi', 'trek') ∧ ISL(· · · ), and (ML('jedi', 'trek') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(· · · ), respectively.", "Topic High frequency words in each topic ?", "have give night film turn performance ?", "not life have own first only family tell ?", "movie have n't get good not see ?", "have black scene tom death die joe ?", "film have n't not make out well see Isolated have film movie not good make n't ?", "star war trek planet effect special Comedy comedy funny laugh school hilarious Disney disney voice mulan animated song Family life love family mother woman father Isolated have film movie not make good n't StarWars star war lucas effect jedi special ?", "science world trek fiction lebowski Comedy funny comedy laugh get hilarious Disney disney truman voice toy show Family family father mother boy child son Isolated have film movie not make good n't StarWars star war toy jedi menace phantom StarTrek alien effect star science special trek Comedy comedy funny laugh hilarious joke Disney disney voice animated mulan Family life love family man story child study on a synthetic corpus, we clarified the property of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.", "In the future, we intend to address detail comparative studies on real corpora and consider a simple method integrating negations into a whole, although we removed them in a preprocessing stage in this study." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "LDA with Dirichlet Forest Priors", "Logical Constraints on Words", "(∧,∨)-expressions of Links", "Shrinking Dirichlet Forests", "Customizing New Links", "Negation of Links", "Comparison on a Synthetic Corpus", "Interactive Topic Analysis", "Conclusions" ] }
GEM-SciDuet-train-119#paper-1323#slide-12
Substitution of links with primitives
(Slide figure: Dirichlet trees over the words A, B, C.)
(Slide figure: Dirichlet trees over the words A, B, C.)
[]
GEM-SciDuet-train-119#paper-1323#slide-13
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177 ], "paper_content_text": [ "Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.", "When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.", "Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.", "Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .", "For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.", "Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.", "We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.", "In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.", "However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?", "Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.", "In this paper, we address such logical expressions of links on LDA-DF framework.", "Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.", "At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.", "This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.", "LDA with Dirichlet Forest Priors We briefly review LDA-DF.", "Let w := w 1 .", ".", ".", "w n be a corpus consisting of D documents, where n is the total number of words in the documents.", "Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.", 
"Let T be the number of topics.", "As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.", "The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.", "The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.", "The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.", "The trees assigned to topics z are denoted as q.", "In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.", "1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.", "This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).", "In the case of CLs, we use the following algorithm.", "For examples, the algorithm creates the two trees in Fig.", "1 (b) for the constraint CL(A, B) ∧ CL(A, C).", "The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.", "Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. 
n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .", "I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .", "C z (s) represents the set of children of node s in tree q z .", "γ (k) z represents a weight of the edge to node k in tree q z .", "Additionally, we define ∑ S s := ∑ s∈S .", "Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s   Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z )   , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .", "|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.", "After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.", "θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).", "We denote it as (∧,∨,¬)-expressions.", "Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.", "Interpretation of negations is discussed in Sec.", "3.4.", "(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.", "The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.", "One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.", "1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.", "1(b) .", "The right tree of Fig.", "1(b) is created by Np(B) ∧ Np(C).", "Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.", "1.", "Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).", "2.", "Calculate the minimum DNF of the primitives.", "3.", "Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.", "Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.", "1.", "We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .", "In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.", "Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms 
of an asymptotic behavior of Dirichlet trees.", "We define asymptotic behavior as Asymptotic Topic Family (ATF) as follows.", "Definition 1 (Asymptotic Topic Family).", "For any (∧, ∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as a family f * calculated by the following rules: Given (∧, ∨)-expressions f 1 and f 2 of primitives and words A, B ∈ W, (i) (f 1 ∨ f 2 ) * := f * 1 ∪ f * 2 (ii) (f 1 ∧ f 2 ) * := f * 1 ∩ f * 2 (iii) Ep * (A, B) := {∅, {A, B}} ⊗ 2 W−{A,B} , (iv) Np * (A) := 2 W−{A} Here, notation ⊗ is defined as X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y } for given two sets X and Y .", "ATF expresses all combinations of words that can occur in a topic when η is large.", "In the above example, the ATF of its expression with respect to W = {A, B, C} is calculated as ((ML(A, B) ∨ ML(A, C)) ∧ CL(B, C)) * = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)) * = ( {∅, {A, B}} ⊗ 2 W−{A,B} ∪{∅, {A, C}} ⊗ 2 W−{A,C} ) ∩ ( 2 W−{B} ∪ 2 W−{C} ) = {∅, {B}, {C}, {A, B}, {A, C}}.", "As we expected, the ATF of the last equation indicates such a constraint that either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.", "Note that the part of {B} satisfies ML(A, C) ∧ CL(B, C).", "If you want to remove {B} and {C}, you can use exclusive disjunctions.", "For the sake of simplicity, we omit descriptions about W when its instance is arbitrary or obvious from now on.", "The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.", "Let MIS(G) be the set of maximal independent sets of graph G. We define (x) ) is equivalent to the union of the power sets of every max- L := {{w, w ′ } | w, w ′ ∈ W, w ̸ = w ′ }.", "imal independent set S ∈ MIS(G) of a graph G := (W, ℓ), that is, ∪ X∈X (∩ x∈X Np * (x) ) = ∪ S∈MIS(G) 2 S .", "Proof.", "For any (∧)-expressions of links characterized by ℓ ⊆ L, we denote f ℓ and G ℓ as the corresponding minimum DNF and graph, respectively.", "We define U ℓ := ∪ S∈MIS(G ℓ ) 2 S .", "When |ℓ| = 1, f * ℓ = U ℓ is trivial.", "Assuming f * ℓ = U ℓ when |ℓ| > 1, for any set ℓ ′ := ℓ ∪ {{A, B}} with an additional link characterized by {A, B} ∈ L, we obtain f * ℓ ′ = ((Np(A) ∨ Np(B)) ∧ f ℓ ) * = (2 W−{A} ∪ 2 W−{B} ) ∩ U ℓ = ∪ S∈MIS(G ℓ ) ( (2 W−{A} ∩ 2 S ) ∪(2 W−{B} ∩ 2 S ) ) = ∪ S∈MIS(G ℓ ) (2 S−{A} ∪ 2 S−{B} ) = ∪ S∈MIS(G ℓ ′ ) 2 S = U ℓ ′ This proves the theorem by induction.", "In the last line of the above deformation, we used ∪ S∈MIS(G) 2 S = ∪ S∈IS(G) 2 S and MIS(G ℓ ′ ) ⊆ ∪ S∈MIS(G ℓ ) ((S − {A}) ∪ (S − {B})) ⊆ IS(G ℓ ′ ), where IS(G) represents the set of all independent sets on graph G. 
In the above theorem, ∪ X∈X (∩ x∈X Np * (x) ) represents asymptotic behaviors of our method, while ∪ S∈MIS(G) 2 S represents those of the existing method.", "By using a similar argument to the proof, we can prove the elements of the two sets are completely the same, i.e., ∩ x∈X Np * (x) = {2 S | S ∈ MIS(G)}.", "This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.", "Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which means the performance improvement of Gibbs sampling for Dirichlet trees.", "This is achieved just by minimizing DNF on asymptotic equivalence relation defined as follows.", "Definition 3 (Asymptotic Equivalence Relation).", "Given two (∧, ∨)-expressions f 1 , f 2 , we say that f 1 is asymptotically equivalent to f 2 , if and only if f * 1 = f * 2 .", "We denote the relation as notation ≍, that is, f 1 ≍ f 2 ⇔ f * 1 = f * 2 .", "The next proposition gives an intuitive understanding of why asymptotic equivalence relation can shrink Dirichlet forests.", "Proposition 4.", "For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B) Proof.", "We prove (a) only.", "We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.", "In the experiment, we prepare conjunctions of random links of MLs and CLs when |W| = 10, and compare the average numbers of Dirichlet trees compiled by minimum DNF (M-DNF) and asymptotic minimum DNF (AM-DNF) in 100 trials.", "The experimental result shown in Tab.", "1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees especially when the number of links is large.", "Customizing New Links Two primitives Ep and Np allow us to easily customize new links without changing the algorithm.", "Let us consider Imply-Link (A, B) or IL(A, B) , which is a constraint that B must appear if A appears in a topic (informally, A → B).", "In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) IL(A, B) is effective when B has multiple meanings as mentioned later in Sec.", "4. 
with respect to W = {A, B} is {∅, {A, B}, {B}}.", "Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to be the same meaning of IL(A, B) ∧ IL(B, A) .", "However, this anticipation is wrong on the normal equivalency, i.e., ML(A, B) ̸ = IL(A, B) ∧ IL(B, A) .", "The asymptotic equivalency can fulfill the anticipation with the next proposition.", "This simultaneously suggests that our definition is semantically valid.", "IL(B, A) ≍ ML(A, B) Proof.", "From Proposition 4, Ep(A, B) = ML(A, B) Further, we can construct XIL(X 1 , · · · , X n , Y ) as an extended version of IL (A, B) , which allows us to use multiple conditions like Horn clauses.", "This informally means ∧ n i=1 X i → Y as an extension of A → B.", "In this case, we set Proposition 5.", "For any two words A, B ∈ W, IL(A, B) ∧ IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ XIL(X 1 , · · · , X n , Y ) = n ∧ i=1 Ep(X i , Y )∨ n ∨ i=1 Np(X i ).", "When we want to isolate unnecessary words (i.e., stop words), we can use Isolate-Link (ISL) defined as ISL(X 1 , · · · , X n ) = n ∧ i=1 Np(X i ).", "This is easier than considering CLs between highfrequency words and unnecessary words as described in ).", "Negation of Links There are two types of interpretation for negation of links.", "One is strong negation, which regards ¬ML (A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".", "We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we just remove ¬ML(A, B) for weak negation.", "We consider the strong negation in this study.", "According to Def.", "1, the ATF of the negation ¬f of primitive f seems to be defined as (¬f ) * := 2 W − f * .", "However, this definition is not fit in strong negation, since ¬ML(A, B) ̸ ≍ CL(A, B) on the definition.", "Thus we define it to be fit in strong negation as follows.", "Definition 6 (ATF of strong negation of links).", "Given a link L with arguments X 1 , · · · , X n , letting f L be the primitives of L, we define the ATF of the negation of L as (¬L(X 1 , · · · , X n )) * := (2 W − f * L (X 1 , · · · , X n )) ∪ 2 W−{X 1 ,··· ,Xn} .", "Note that the definition is used not for primitives but for links.", "Actually, the similar definition for primitives is not fit in strong negation, and so we must remove all negations in a preprocessing stage.", "The next proposition gives the way to remove the negation of each link treated in this study.", "We define no constraint condition as ϵ for the result of ISL.", "Proposition 7.", "For any words A, B, X 1 , · · · , X n , Y ∈ W, (a) ¬ML(A, B) ≍ CL(A, B) (b) ¬CL(A, B) ≍ ML(A, B) (c) ¬IL(A, B) ≍ Np(B) (d) ¬XIL(X 1 , · · · , X n , Y ) ≍ ∧ n−1 i=1 Ep(X i , X n ) ∧ Np(Y ) (e) ¬ISL(X 1 , · · · , X n ) ≍ ϵ Proof.", "We prove (a) only.", "(¬ML (A, B) ) * = (2 W − Ep * (A, B) (CL(A, B) ) * ) ∪ 2 W−{A,B} = (2 {A,B} − {∅, {A, B}}) ⊗ 2 W−{A,B} ∪ 2 W−{A,B} = {∅, {A}, {B}} ⊗ 2 W−{A,B} = 2 W−{A} ∪ 2 W−{B} = Np * (A) ∪ Np * (B) = Comparison on a Synthetic Corpus We experiment using a synthetic corpus {ABAB, ACAC} × 2 with vocabulary W = {A, B, C} to clarify the property of our method in the same way as in the existing work .", "We set topic size as T = 2.", "The goal of this experiment is to obtain two topics: a topic where A and B frequently occur and a topic where A and C frequently occur.", "We abbreviate the grouping type as AB|AC.", 
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.", "Thus, we naively classify a grouping type of each result into the four types.", "Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.", "Fig.", "2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.", "The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.", "Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).", "The result of (1) is the same result as LDA, because of no constraints.", "In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.", "As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.", "The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.", "Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.", "The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).", "The reason is that (5) allows A to appear with C, while (4) does not.", "In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.", "Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.", "Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.", "2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.", "Each topic label is determined by looking carefully at highfrequency words in the topic.", "To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' had other topics such as a topic about comedy film Big 
Lebowski.", "We finally added ML('star', 'jedi') ∨ ML ('star', ' trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics considering polysemy of 'star'.", "After the fourth run of LDA-DF, we appropriately obtained two topics about Star Wars and Star Trek as in the fourth block.", "Note that our solution is not ad-hoc, and we can easily apply it to similar problems.", "Conclusions We proposed a simple method to achieve topic models with logical constraints on words.", "Our method compiles a given constraint to the prior of LDA-DF, which is a recently developed semisupervised extension of LDA with Dirichlet forest priors.", "As well as covering the constraints in the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.", "We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.", "In the comparative Table 2 : Characteristic topics obtained in the experiment on the real corpus.", "Four blocks in the table corresponds to the results of the four constraints ϵ, ISL(· · · ), CL('jedi', 'trek') ∧ ISL(· · · ), and (ML('jedi', 'trek') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(· · · ), respectively.", "Topic High frequency words in each topic ?", "have give night film turn performance ?", "not life have own first only family tell ?", "movie have n't get good not see ?", "have black scene tom death die joe ?", "film have n't not make out well see Isolated have film movie not good make n't ?", "star war trek planet effect special Comedy comedy funny laugh school hilarious Disney disney voice mulan animated song Family life love family mother woman father Isolated have film movie not make good n't StarWars star war lucas effect jedi special ?", "science world trek fiction lebowski Comedy funny comedy laugh get hilarious Disney disney truman voice toy show Family family father mother boy child son Isolated have film movie not make good n't StarWars star war toy jedi menace phantom StarTrek alien effect star science special trek Comedy comedy funny laugh hilarious joke Disney disney voice animated mulan Family life love family man story child study on a synthetic corpus, we clarified the property of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.", "In the future, we intend to address detail comparative studies on real corpora and consider a simple method integrating negations into a whole, although we removed them in a preprocessing stage in this study." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "LDA with Dirichlet Forest Priors", "Logical Constraints on Words", "(∧,∨)-expressions of Links", "Shrinking Dirichlet Forests", "Customizing New Links", "Negation of Links", "Comparison on a Synthetic Corpus", "Interactive Topic Analysis", "Conclusions" ] }
GEM-SciDuet-train-119#paper-1323#slide-13
Proposed algorithm for logical expressions
1. Substitute links with primitives. 2. Calculate the minimum disjunctive normal form (DNF) of the primitives. 3. Generate distributions for each conjunction of the DNF. (Slide figure: example trees over the words A, B, C; captions: "primitives", "DNF = Disjunction of conjunctions of primitives", "Combine each conjunction of primitives"; resulting trees over the words A-G.)
1. Substitute links with primitives. 2. Calculate the minimum disjunctive normal form (DNF) of the primitives. 3. Generate distributions for each conjunction of the DNF. (Slide figure: example trees over the words A, B, C; captions: "primitives", "DNF = Disjunction of conjunctions of primitives", "Combine each conjunction of primitives"; resulting trees over the words A-G.)
[]
GEM-SciDuet-train-119#paper-1323#slide-14
1323
Topic Models with Logical Constraints on Words
This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177 ], "paper_content_text": [ "Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.", "When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.", "Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.", "Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .", "For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.", "Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.", "We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.", "In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.", "However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?", "Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.", "In this paper, we address such logical expressions of links on LDA-DF framework.", "Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.", "At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.", "This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.", "LDA with Dirichlet Forest Priors We briefly review LDA-DF.", "Let w := w 1 .", ".", ".", "w n be a corpus consisting of D documents, where n is the total number of words in the documents.", "Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.", 
"Let T be the number of topics.", "As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.", "The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.", "The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.", "The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.", "The trees assigned to topics z are denoted as q.", "In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.", "1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.", "This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).", "In the case of CLs, we use the following algorithm.", "For examples, the algorithm creates the two trees in Fig.", "1 (b) for the constraint CL(A, B) ∧ CL(A, C).", "The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.", "Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. 
n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .", "I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .", "C z (s) represents the set of children of node s in tree q z .", "γ (k) z represents a weight of the edge to node k in tree q z .", "Additionally, we define ∑ S s := ∑ s∈S .", "Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s   Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z )   , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .", "|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.", "After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.", "θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).", "We denote it as (∧,∨,¬)-expressions.", "Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.", "Interpretation of negations is discussed in Sec.", "3.4.", "(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.", "The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.", "One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.", "1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.", "1(b) .", "The right tree of Fig.", "1(b) is created by Np(B) ∧ Np(C).", "Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.", "1.", "Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).", "2.", "Calculate the minimum DNF of the primitives.", "3.", "Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.", "Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.", "1.", "We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .", "In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.", "Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms 
"We define this asymptotic behavior as the Asymptotic Topic Family (ATF), as follows.", "Definition 1 (Asymptotic Topic Family). For any $(\wedge,\vee)$-expression $f$ of primitives and any set $\mathcal{W}$ of words, we define the asymptotic topic family of $f$ with respect to $\mathcal{W}$ as the family $f^*$ calculated by the following rules: given $(\wedge,\vee)$-expressions $f_1$ and $f_2$ of primitives and words $A, B \in \mathcal{W}$, (i) $(f_1 \vee f_2)^* := f_1^* \cup f_2^*$; (ii) $(f_1 \wedge f_2)^* := f_1^* \cap f_2^*$; (iii) $\mathrm{Ep}^*(A, B) := \{\emptyset, \{A, B\}\} \otimes 2^{\mathcal{W} - \{A, B\}}$; (iv) $\mathrm{Np}^*(A) := 2^{\mathcal{W} - \{A\}}$.", "Here, the notation $\otimes$ is defined as $X \otimes Y := \{x \cup y \mid x \in X, y \in Y\}$ for any two sets $X$ and $Y$.", "The ATF expresses all combinations of words that can occur in a topic when $\eta$ is large.", "In the above example, the ATF of the expression with respect to $\mathcal{W} = \{A, B, C\}$ is calculated as $((\mathrm{ML}(A, B) \vee \mathrm{ML}(A, C)) \wedge \mathrm{CL}(B, C))^* = ((\mathrm{Ep}(A, B) \vee \mathrm{Ep}(A, C)) \wedge (\mathrm{Np}(B) \vee \mathrm{Np}(C)))^* = \bigl(\{\emptyset, \{A, B\}\} \otimes 2^{\mathcal{W} - \{A, B\}} \cup \{\emptyset, \{A, C\}\} \otimes 2^{\mathcal{W} - \{A, C\}}\bigr) \cap \bigl(2^{\mathcal{W} - \{B\}} \cup 2^{\mathcal{W} - \{C\}}\bigr) = \{\emptyset, \{B\}, \{C\}, \{A, B\}, \{A, C\}\}$.", "As we expected, the ATF of the last equation indicates the constraint that either $A$ and $B$ or $A$ and $C$ must appear in the same topic, and that $B$ and $C$ cannot appear in the same topic.", "Note that the element $\{B\}$ satisfies $\mathrm{ML}(A, C) \wedge \mathrm{CL}(B, C)$.", "If you want to remove $\{B\}$ and $\{C\}$, you can use exclusive disjunctions.", "For the sake of simplicity, we omit descriptions of $\mathcal{W}$ from now on when its instance is arbitrary or obvious.", "The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.", "Let $\mathrm{MIS}(G)$ be the set of maximal independent sets of graph $G$, and define $L := \{\{w, w'\} \mid w, w' \in \mathcal{W}, w \neq w'\}$.", "Theorem 2. For any $(\wedge)$-expression of cannot-links characterized by $\ell \subseteq L$, let $\mathcal{X}$ be the set of monomials of its minimum DNF of primitives; then $\bigcup_{X \in \mathcal{X}} \bigl(\bigcap_{x \in X} \mathrm{Np}^*(x)\bigr)$ is equivalent to the union of the power sets of every maximal independent set $S \in \mathrm{MIS}(G)$ of the graph $G := (\mathcal{W}, \ell)$, that is, $\bigcup_{X \in \mathcal{X}} \bigl(\bigcap_{x \in X} \mathrm{Np}^*(x)\bigr) = \bigcup_{S \in \mathrm{MIS}(G)} 2^S$.", "Proof. For any $(\wedge)$-expression of links characterized by $\ell \subseteq L$, we denote by $f_\ell$ and $G_\ell$ the corresponding minimum DNF and graph, respectively, and define $U_\ell := \bigcup_{S \in \mathrm{MIS}(G_\ell)} 2^S$.", "When $|\ell| = 1$, $f_\ell^* = U_\ell$ is trivial.", "Assuming $f_\ell^* = U_\ell$ holds for some $\ell$ with $|\ell| \geq 1$, for any set $\ell' := \ell \cup \{\{A, B\}\}$ with an additional link characterized by $\{A, B\} \in L$, we obtain $f_{\ell'}^* = \bigl((\mathrm{Np}(A) \vee \mathrm{Np}(B)) \wedge f_\ell\bigr)^* = \bigl(2^{\mathcal{W} - \{A\}} \cup 2^{\mathcal{W} - \{B\}}\bigr) \cap U_\ell = \bigcup_{S \in \mathrm{MIS}(G_\ell)} \bigl((2^{\mathcal{W} - \{A\}} \cap 2^S) \cup (2^{\mathcal{W} - \{B\}} \cap 2^S)\bigr) = \bigcup_{S \in \mathrm{MIS}(G_\ell)} \bigl(2^{S - \{A\}} \cup 2^{S - \{B\}}\bigr) = \bigcup_{S \in \mathrm{MIS}(G_{\ell'})} 2^S = U_{\ell'}$.", "This proves the theorem by induction.", "In the last line of the above derivation, we used $\bigcup_{S \in \mathrm{MIS}(G)} 2^S = \bigcup_{S \in \mathrm{IS}(G)} 2^S$ and $\mathrm{MIS}(G_{\ell'}) \subseteq \bigcup_{S \in \mathrm{MIS}(G_\ell)} \{S - \{A\}, S - \{B\}\} \subseteq \mathrm{IS}(G_{\ell'})$, where $\mathrm{IS}(G)$ represents the set of all independent sets of graph $G$.",
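Because ATFs are finite set families over a finite vocabulary, Definition 1 can be executed directly, which gives a cheap brute-force check of the theorem on small examples. The sketch below is our own (it assumes networkx is available); the right-hand side obtains the maximal independent sets of G as the maximal cliques of the complement graph.

```python
# Sanity check of Theorem 2 on a tiny vocabulary (our own sketch, not the
# authors' code). The left-hand side evaluates the ATF of a conjunction of
# CLs by Definition 1; the right-hand side unions the power sets of the
# maximal independent sets of G = (W, links).
from itertools import chain, combinations
import networkx as nx

W = {"A", "B", "C", "D"}

def powerset(s):
    s = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

def Np(a):                          # rule (iv): Np*(A) = 2^(W - {A})
    return powerset(W - {a})

def CL(a, b):                       # CL(A,B) = Np(A) v Np(B), by rule (i)
    return Np(a) | Np(b)

links = [("A", "B"), ("B", "C")]
lhs = CL(*links[0]) & CL(*links[1])   # rule (ii): conjunction of the CLs

G = nx.Graph(links)
G.add_nodes_from(W)                   # keep isolated words such as D
rhs = set().union(*(powerset(S) for S in
                    nx.find_cliques(nx.complement(G))))
assert lhs == rhs                     # the two families coincide
```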
"In the above theorem, $\bigcup_{X \in \mathcal{X}} \bigl(\bigcap_{x \in X} \mathrm{Np}^*(x)\bigr)$ represents the asymptotic behavior of our method, while $\bigcup_{S \in \mathrm{MIS}(G)} 2^S$ represents that of the existing method.", "By an argument similar to the proof, we can show that the elements of the two families are exactly the same, i.e., $\{\bigcap_{x \in X} \mathrm{Np}^*(x) \mid X \in \mathcal{X}\} = \{2^S \mid S \in \mathrm{MIS}(G)\}$.", "Interestingly, this means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or equivalently the maximal cliques of its complement graph.", "Shrinking Dirichlet Forests", "Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which improves the performance of Gibbs sampling over the Dirichlet forest.", "This is achieved simply by minimizing the DNF under the asymptotic equivalence relation defined as follows.", "Definition 3 (Asymptotic Equivalence Relation). Given two $(\wedge,\vee)$-expressions $f_1$ and $f_2$, we say that $f_1$ is asymptotically equivalent to $f_2$ if and only if $f_1^* = f_2^*$.", "We denote this relation by $\asymp$, that is, $f_1 \asymp f_2 \Leftrightarrow f_1^* = f_2^*$.", "The next proposition gives an intuitive understanding of why the asymptotic equivalence relation can shrink Dirichlet forests.", "Proposition 4. For any two words $A, B \in \mathcal{W}$, (a) $\mathrm{Ep}(A, B) \vee (\mathrm{Np}(A) \wedge \mathrm{Np}(B)) \asymp \mathrm{Ep}(A, B)$, and (b) $\mathrm{Ep}(A, B) \wedge \mathrm{Np}(A) \asymp \mathrm{Np}(A) \wedge \mathrm{Np}(B)$.", "Proof. We prove (a) only: $(\mathrm{Ep}(A, B) \vee (\mathrm{Np}(A) \wedge \mathrm{Np}(B)))^* = \mathrm{Ep}^*(A, B) \cup 2^{\mathcal{W} - \{A, B\}} = \mathrm{Ep}^*(A, B)$, since $2^{\mathcal{W} - \{A, B\}} = \{\emptyset\} \otimes 2^{\mathcal{W} - \{A, B\}} \subseteq \mathrm{Ep}^*(A, B)$.", "We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.", "In the experiment, we prepare conjunctions of random ML and CL links with $|\mathcal{W}| = 10$, and compare the average numbers of Dirichlet trees compiled by the minimum DNF (M-DNF) and the asymptotic minimum DNF (AM-DNF) over 100 trials.", "The experimental result shown in Tab. 1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees, especially when the number of links is large.", "Customizing New Links", "The two primitives Ep and Np allow us to easily customize new links without changing the algorithm.", "Let us consider Imply-Link(A, B), or $\mathrm{IL}(A, B)$, a constraint meaning that $B$ must appear in a topic whenever $A$ appears in it (informally, $A \rightarrow B$).", "In this case, the setting $\mathrm{IL}(A, B) = \mathrm{Ep}(A, B) \vee \mathrm{Np}(A)$ is acceptable, since the ATF of $\mathrm{IL}(A, B)$ with respect to $\mathcal{W} = \{A, B\}$ is $\{\emptyset, \{A, B\}, \{B\}\}$.", "($\mathrm{IL}(A, B)$ is effective when $B$ has multiple meanings, as mentioned later in Sec. 4.)",
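The claimed ATF of IL can be verified in the same brute-force style; the following continues the previous sketch (our own code; the _atf suffix is ours and merely keeps these set-valued helpers apart from the symbolic Ep/Np of the first sketch).

```python
# Checking the ATF of IL(A,B) = Ep(A,B) v Np(A) over W = {A, B}
# (our own continuation of the previous sketch; powerset is reused,
# and W is rebound to the two-word vocabulary).
W = {"A", "B"}

def otimes(X, Y):                  # X (x) Y := {x u y | x in X, y in Y}
    return {x | y for x in X for y in Y}

def Ep_atf(a, b):                  # rule (iii)
    return otimes({frozenset(), frozenset({a, b})}, powerset(W - {a, b}))

def IL_atf(a, b):                  # (Ep(A,B) v Np(A))* by rule (i)
    return Ep_atf(a, b) | powerset(W - {a})   # Np*(a) = 2^(W - {a})

assert IL_atf("A", "B") == {frozenset(), frozenset({"A", "B"}),
                            frozenset({"B"})}
```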
"Informally regarding $\mathrm{IL}(A, B)$ as $A \rightarrow B$ and $\mathrm{ML}(A, B)$ as $A \Leftrightarrow B$, $\mathrm{ML}(A, B)$ seems to have the same meaning as $\mathrm{IL}(A, B) \wedge \mathrm{IL}(B, A)$.", "However, this anticipation is wrong under ordinary equivalence, i.e., $\mathrm{ML}(A, B) \neq \mathrm{IL}(A, B) \wedge \mathrm{IL}(B, A)$.", "Asymptotic equivalency fulfills the anticipation through the next proposition, which simultaneously suggests that our definition is semantically valid.", "Proposition 5. For any two words $A, B \in \mathcal{W}$, $\mathrm{IL}(A, B) \wedge \mathrm{IL}(B, A) \asymp \mathrm{ML}(A, B)$.", "Proof. $\mathrm{IL}(A, B) \wedge \mathrm{IL}(B, A) = (\mathrm{Ep}(A, B) \vee \mathrm{Np}(A)) \wedge (\mathrm{Ep}(B, A) \vee \mathrm{Np}(B)) = \mathrm{Ep}(A, B) \vee (\mathrm{Ep}(A, B) \wedge \mathrm{Np}(A)) \vee (\mathrm{Ep}(A, B) \wedge \mathrm{Np}(B)) \vee (\mathrm{Np}(A) \wedge \mathrm{Np}(B)) \asymp \mathrm{Ep}(A, B) \vee (\mathrm{Np}(A) \wedge \mathrm{Np}(B)) \asymp \mathrm{Ep}(A, B) = \mathrm{ML}(A, B)$, where the first asymptotic step follows from Proposition 4(b) and the second from Proposition 4(a).", "Further, we can construct $\mathrm{XIL}(X_1, \cdots, X_n, Y)$ as an extended version of IL, which allows us to use multiple conditions like Horn clauses.", "This informally means $\bigwedge_{i=1}^{n} X_i \rightarrow Y$, as an extension of $A \rightarrow B$.", "In this case, we set $\mathrm{XIL}(X_1, \cdots, X_n, Y) = \bigwedge_{i=1}^{n} \mathrm{Ep}(X_i, Y) \vee \bigvee_{i=1}^{n} \mathrm{Np}(X_i)$.", "When we want to isolate unnecessary words (i.e., stop words), we can use the Isolate-Link (ISL), defined as $\mathrm{ISL}(X_1, \cdots, X_n) = \bigwedge_{i=1}^{n} \mathrm{Np}(X_i)$.", "This is easier than considering CLs between high-frequency words and unnecessary words, as described in the original LDA-DF work.", "Negation of Links", "There are two types of interpretation for the negation of links.", "One is strong negation, which regards $\neg\mathrm{ML}(A, B)$ as '$A$ and $B$ must not appear in the same topic', and the other is weak negation, which regards it as '$A$ and $B$ need not appear in the same topic'.", "We set $\neg\mathrm{ML}(A, B) \asymp \mathrm{CL}(A, B)$ for strong negation, while we simply remove $\neg\mathrm{ML}(A, B)$ for weak negation; we consider strong negation in this study.", "Following Def. 1, the ATF of the negation $\neg f$ of a primitive $f$ would seem to be defined as $(\neg f)^* := 2^{\mathcal{W}} - f^*$.", "However, this definition does not fit strong negation, since under it $\neg\mathrm{ML}(A, B) \not\asymp \mathrm{CL}(A, B)$.", "We therefore define the ATF so as to fit strong negation, as follows.", "Definition 6 (ATF of strong negation of links). Given a link $\mathrm{L}$ with arguments $X_1, \cdots, X_n$, letting $f_{\mathrm{L}}$ be the primitives of $\mathrm{L}$, we define the ATF of the negation of $\mathrm{L}$ as $(\neg\mathrm{L}(X_1, \cdots, X_n))^* := \bigl(2^{\mathcal{W}} - f_{\mathrm{L}}^*(X_1, \cdots, X_n)\bigr) \cup 2^{\mathcal{W} - \{X_1, \cdots, X_n\}}$.", "Note that this definition is used for links, not for primitives; the analogous definition for primitives does not fit strong negation, and so we must remove all negations in a preprocessing stage.", "The next proposition gives the way to remove the negation of each link treated in this study (a small preprocessing sketch follows the proof), where $\epsilon$ denotes the no-constraint condition resulting from negating ISL.", "Proposition 7. For any words $A, B, X_1, \cdots, X_n, Y \in \mathcal{W}$: (a) $\neg\mathrm{ML}(A, B) \asymp \mathrm{CL}(A, B)$; (b) $\neg\mathrm{CL}(A, B) \asymp \mathrm{ML}(A, B)$; (c) $\neg\mathrm{IL}(A, B) \asymp \mathrm{Np}(B)$; (d) $\neg\mathrm{XIL}(X_1, \cdots, X_n, Y) \asymp \bigwedge_{i=1}^{n-1} \mathrm{Ep}(X_i, X_n) \wedge \mathrm{Np}(Y)$; (e) $\neg\mathrm{ISL}(X_1, \cdots, X_n) \asymp \epsilon$.", "Proof. We prove (a) only: $(\neg\mathrm{ML}(A, B))^* = \bigl(2^{\mathcal{W}} - \mathrm{Ep}^*(A, B)\bigr) \cup 2^{\mathcal{W} - \{A, B\}} = \bigl((2^{\{A, B\}} - \{\emptyset, \{A, B\}\}) \otimes 2^{\mathcal{W} - \{A, B\}}\bigr) \cup 2^{\mathcal{W} - \{A, B\}} = \{\emptyset, \{A\}, \{B\}\} \otimes 2^{\mathcal{W} - \{A, B\}} = 2^{\mathcal{W} - \{A\}} \cup 2^{\mathcal{W} - \{B\}} = \mathrm{Np}^*(A) \cup \mathrm{Np}^*(B) = (\mathrm{CL}(A, B))^*$.",
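Proposition 7 amounts to a small preprocessing routine that rewrites each negated link into an asymptotically equivalent negation-free form. The sketch below is our own, and the tagged-tuple encoding of links is an assumption for illustration, not the authors' data structure.

```python
# Preprocessing that removes negated links, following Proposition 7
# (our own sketch; links are modeled as ("NAME", args) tuples, which is
# an assumed encoding rather than the authors' data structure).
def remove_negation(name, args):
    """Rewrite ~NAME(args) into a negation-free constraint."""
    if name == "ML":                   # (a)  ~ML(A,B)  =~  CL(A,B)
        return ("CL", args)
    if name == "CL":                   # (b)  ~CL(A,B)  =~  ML(A,B)
        return ("ML", args)
    if name == "IL":                   # (c)  ~IL(A,B)  =~  Np(B)
        return ("Np", (args[1],))
    if name == "XIL":                  # (d)  Ep(X_i, X_n) for i < n, plus Np(Y)
        xs, y = args[:-1], args[-1]
        return ("AND", tuple(("Ep", (x, xs[-1])) for x in xs[:-1])
                       + (("Np", (y,)),))
    if name == "ISL":                  # (e)  no constraint (epsilon)
        return ("EPS", ())
    raise ValueError(f"unknown link: {name}")

# Example: ~XIL(X1, X2, Y) -> Ep(X1, X2) ^ Np(Y)
print(remove_negation("XIL", ("X1", "X2", "Y")))
```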
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.", "Thus, we naively classify a grouping type of each result into the four types.", "Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.", "Fig.", "2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.", "The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.", "Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).", "The result of (1) is the same result as LDA, because of no constraints.", "In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.", "As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.", "The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.", "Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.", "The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).", "The reason is that (5) allows A to appear with C, while (4) does not.", "In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.", "Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.", "Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .", "In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.", "2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.", "Each topic label is determined by looking carefully at highfrequency words in the topic.", "To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' had other topics such as a topic about comedy film Big 
"Interactive Topic Analysis", "We demonstrate the advantages of our method via interactive topic analysis on a real corpus, which consists of the stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004).", "In this experiment, the parameters are set as $\alpha = 1$, $\beta = 0.01$, $\eta = 1000$, and $T = 20$.", "We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics contained stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film' and 'movie'), as in the first block of Tab. 2.", "To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.", "After the second run of LDA-DF with the isolate-link, we could identify most topics, such as Comedy, Disney, and Family, since the cumbersome words were isolated; however, we noticed that the two topics about Star Wars and Star Trek were merged, as in the second block.", "Each topic label is determined by looking carefully at the high-frequency words of the topic.", "To split the merged topics, we added CL('jedi', 'trek') to the constraint, which is compiled to two Dirichlet trees.", "However, after the third run of LDA-DF, we noticed that there was no topic solely about Star Trek, since 'star' appeared only in the Star Wars topic, as in the third block.", "Note that the topic including 'trek' was mixed with other films, such as the comedy Big Lebowski.", "We finally added ML('star', 'jedi') ∨ ML('star', 'trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics while accounting for the polysemy of 'star'.", "After the fourth run of LDA-DF, we appropriately obtained the two topics about Star Wars and Star Trek, as in the fourth block.", "Note that our solution is not ad hoc, and we can easily apply it to similar problems.", "Table 2: Characteristic topics obtained in the experiment on the real corpus. The four blocks correspond to the results of the four constraints $\epsilon$, ISL(...), CL('jedi', 'trek') ∧ ISL(...), and (ML('star', 'jedi') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(...), respectively; each row lists a topic label and the high-frequency words of that topic.", "Block 1 ($\epsilon$) -- ?: have give night film turn performance; ?: not life have own first only family tell; ?: movie have n't get good not see; ?: have black scene tom death die joe; ?: film have n't not make out well see.", "Block 2 (ISL) -- Isolated: have film movie not good make n't; ?: star war trek planet effect special; Comedy: comedy funny laugh school hilarious; Disney: disney voice mulan animated song; Family: life love family mother woman father.", "Block 3 (CL ∧ ISL) -- Isolated: have film movie not make good n't; StarWars: star war lucas effect jedi special; ?: science world trek fiction lebowski; Comedy: funny comedy laugh get hilarious; Disney: disney truman voice toy show; Family: family father mother boy child son.", "Block 4 ((ML ∨ ML) ∧ CL ∧ ISL) -- Isolated: have film movie not make good n't; StarWars: star war toy jedi menace phantom; StarTrek: alien effect star science special trek; Comedy: comedy funny laugh hilarious joke; Disney: disney voice animated mulan; Family: life love family man story child.", "Conclusions", "We proposed a simple method to achieve topic models with logical constraints on words.", "Our method compiles a given constraint into the prior of LDA-DF, a recently developed semi-supervised extension of LDA with Dirichlet forest priors.", "As well as covering the constraints expressible in the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.", "We proved that our method is asymptotically the same as the existing method for any constraint with a conjunctive expression, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.", "In the comparative study on a synthetic corpus, we clarified the properties of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.", "In the future, we intend to conduct detailed comparative studies on real corpora and to consider a method that integrates negations directly, rather than removing them in a preprocessing stage as in this study." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "LDA with Dirichlet Forest Priors", "Logical Constraints on Words", "(∧,∨)-expressions of Links", "Shrinking Dirichlet Forests", "Customizing New Links", "Negation of Links", "Comparison on a Synthetic Corpus", "Interactive Topic Analysis", "Conclusions" ] }
GEM-SciDuet-train-119#paper-1323#slide-14
Correctness of our method
[Theorem] Our method and the existing method are asymptotically equivalent w.r.t. conjunctive expressions of links. (Figure: word distributions induced by the primitives vs. distributions induced by a graph, shown for example vocabularies A-D and A-G; the primitives coincide with CL.)
[Theorem] Our method and the existing method are asymptotically equivalent w.r.t. conjunctive expressions of links. (Figure: word distributions induced by the primitives vs. distributions induced by a graph, shown for example vocabularies A-D and A-G; the primitives coincide with CL.)
[]