Dataset schema (column: type, string length range):

gem_id: string, lengths 37–41
paper_id: string, lengths 3–4
paper_title: string, lengths 19–183
paper_abstract: string, lengths 168–1.38k
paper_content: sequence
paper_headers: sequence
slide_id: string, lengths 37–41
slide_title: string, lengths 2–85
slide_content_text: string, lengths 11–2.55k
target: string, lengths 11–2.55k
references: list
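The rows below can be loaded programmatically. A minimal sketch using the Hugging Face `datasets` library; the Hub id GEM/SciDuet is an assumption inferred from the gem_id prefix, not confirmed by this dump:

```python
from datasets import load_dataset  # Hugging Face `datasets` package

# Assumed dataset id, inferred from gem_id values such as
# "GEM-SciDuet-train-124#paper-1342#slide-13".
ds = load_dataset("GEM/SciDuet", split="train")

row = ds[0]
print(row["gem_id"], "-", row["slide_title"])
print(row["target"][:200])  # slide text the model should generate
```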
gem_id: GEM-SciDuet-train-124#paper-1342#slide-13
paper_id: 1342
paper_title: Document Modeling with External Attention for Sentence Extraction
paper_abstract: Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. [1]
* The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh.
[1] Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/Document-Models-with-Ext-Information.
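The paper content that follows describes a convolutional sentence encoder: a kernel of width h slides over a k-word sentence, yields a feature map of length k−h+1, and is max-pooled over time, with multiple kernels concatenated into the sentence embedding. A minimal numpy sketch of that operation; the dimensions and filters are illustrative, not the paper's configuration:

```python
import numpy as np

def conv_sentence_encoder(X, kernels):
    """Temporal narrow convolution with max-over-time pooling (cf. Kim, 2014).

    X       : (k, d) word embeddings of a k-word sentence.
    kernels : list of (h, d) filters of various widths h (assumes k >= h).
    Each filter yields a feature map of length k - h + 1; its max becomes
    one dimension of the sentence embedding.
    """
    feats = []
    for K in kernels:
        h = K.shape[0]
        fmap = [np.sum(X[i:i + h] * K) for i in range(X.shape[0] - h + 1)]
        feats.append(max(fmap))
    return np.array(feats)  # one dimension per filter
```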
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .", "A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.", "However, document modeling, a key to many natural language understanding tasks, is still an open challenge.", "Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .", "Lin et al.", "(2015) and Yang et al.", "(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.", "Tran et al.", "(2016) further proposed a contextual language model that considers information at interdocument level.", "It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .", "In this paper, we formalize the use of external information to further guide document modeling for end goals.", "We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"", "Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.", "Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.", "(2018) in that it derives the document meaning representation from its sentences and their constituent words.", "Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.", "Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.", "We demonstrate the effectiveness of our model on two problems 
that can be naturally framed as sentence extraction with external information.", "These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.", "Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.", "For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.", "For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.", "Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.", "Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.", "We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .", "Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.", "We also conduct a human evaluation to judge which type of summary participants prefer.", "Our results overwhelmingly show that human subjects find our summaries more informative and complete.", "Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .", "Our model with ISF and IDF scores as external features achieves competitive results for answer selection.", "Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .", "We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.", "Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.", "Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 
2017) .", "The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.", "The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.", "Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.", "We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.", "This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.", "We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.", "In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.", "The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .", "The final sentence embeddings have six dimensions.", "Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.", "We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .", "Given a document D consisting of a sequence of sentences (s 1 , s 2 , .", ".", ".", ", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .", "Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.", "It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.", "Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).", "Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.", "At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.", "This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.", "Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .", "h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.", "s 1 , .", ".", ".", ", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.", "For the extractive summarization task, e i s are external information such as 
title and image captions.", "For the answers selection task, e i s are the query and word overlap features.", "ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .", ".", ".", ", e p ).", "Figure 1 summarizes our model.", "Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.", "Both these tasks require local and global contextual reasoning about a given document.", "As such, they test the ability of our model to facilitate document modeling using external information.", "Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).", "In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.", "We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.", "We formulate external information E as the sequence of the title and the image captions associated with the document.", "We use the convolutional sentence encoder to get their sentence-level representations.", "Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.", "In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.", "We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.", "We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.", "This simplifies Eq.", "(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.", "We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.", "Trischler et al.", "(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.", "The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.", "Note that, s i ∩ q refers to the set of words that appear both in s i and in q.", "Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.", "More formally, this modifies Eq.", "(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .", "The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the 
network and 1 is a vector of 1s of size equal to the sentence embedding size.", "In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.", "Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.", "In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.", "Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .", "2 We used the standard splits of Hermann et al.", "(2015) for training, validation, and testing (90,266/1,220/1,093 documents).", "We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.", "We trained our network on a named-entity-anonymized version of news articles.", "However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.", "To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).", "We followed Nallapati et al.", "(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.", "We used a modified script of Hermann et al.", "(2015) to extract titles and image captions, and we associated them with the corresponding articles.", "All articles get associated with their titles.", "The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.", "There are 40% CNN articles with at least one image caption.", "All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.", "All input documents were padded with zeros to a maximum document length of 126.", "For each document, we consider a maximum of 10 image captions.", "We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.", "We refer the reader to the supplementary material for more implementation details to replicate our results.", "Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.", "We refer to this baseline as LEAD in the rest of the paper.", "We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .", "We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .", "3 It does not exploit any external information.", "4 The architecture of POINTERNET is closely related to our model without external information.", "4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.", "6 Previous 
work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.", "In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.", "teresting direction of research but we do not pursue it here.", "It requires decoding with multiple types of attentions and this is not the focus of this paper.", "5 We are unable to compare our results to the extractive system of Nallapati et al.", "(2017) because they report their results on the DailyMail dataset and their code is not available.", "The abstractive systems of Chen et al.", "(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.", "We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.", "6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"", "We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.", "For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.", "We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.", "We experimented with two types of external information: title (TITLE) and image captions (CAPTION).", "In addition, we experimented with the first sentence (FS) of the document as external information.", "Note that the latter is not external information, it is a sentence in the document.", "However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .", "XNET with FS acts as a baseline for XNET with title and image captions.", "We report the performance of several variants of XNET on the validation set in Table 1 .", "We also compare them against the LEAD baseline and POINTERNET.", "These two systems do not use any additional information.", "Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.", "When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.", "Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .", "The performance with TITLE and CAP-TION is better than that with FS.", "We also tried possible combinations of TITLE, CAPTION and FS.", "All XNET models are superior to the ones without any external information.", "XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).", "It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.", "We use this model for testing purposes.", "Our final results on the test set are shown in to XNET.", "This result could be because LEAD (always) and POINTERNET 
(often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.", "This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.", "We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.", "It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.", "XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.", "Human Evaluation We complement our automatic evaluation results with human evaluation.", "We randomly selected 20 articles from the test set.", "Annotators were presented with a news article and summaries from four different systems.", "These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.", "We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)", "and fluency (is the summary written in well-formed English?).", "We did not allow any ties and we only sampled articles with nonidentical summaries.", "We assigned this task to five annotators who were proficient English speakers.", "Each annotator was presented with all 20 articles.", "The order of summaries to rank was randomized per article.", "An example of summaries our subjects ranked is provided in the supplementary material.", "The results of our human evaluation study are shown in Table 3 .", "As one might imagine, HUMAN gets ranked 1st most of the time (41%).", "However, it is closely followed by XNET which ranked 1st 28% of the time.", "In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.", "We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).", "It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.", "On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.", "The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.", "Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .", "NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.", "It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.", "(2015) .", "In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.", "WikiQA was collected by mining web-searching query logs and then 
associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.", "A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.", "We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.", "In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.", "Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.", "For validation, we set apart 10% of each official training set.", "Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.", "Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.", "Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.", "We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .", "The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.", "The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).", "In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .", "We experiment with several variants of our model.", "XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .", "(WGT) WRD CNT stands for the (weighted) word count baseline.", "See text for more details.", "tence extractor conditioned only on the query q as external information (Eq.", "(3) ).", "XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.", "(4) ).", "We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.", "In our experiments, we set k = 5.", "In the end, we experimented with an ensemble network LRXNET which combines 
the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.", "It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.", "We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.", "Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).", "Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.", "Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.", "Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.", "This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.", "Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.", "Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.", "This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.", "Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.", "Using it as a hard constraint, with XNETTOPK, does not achieve the best result.", "We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.", "As such, XNET+ is capable of using this feature in datasets with richer context.", "It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.", "For the SQuAD dataset, the results are comparable (less than 1%).", "However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.", "This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.", "Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.", "7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.", "It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.", "As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.", "This can be observed by the fact that XNET and PAIRCNN obtain comparable results.", "COMPAGGR performs better because comparing each candidate independently is a better strategy.", "Conclusion We describe an approach to model documents while incorporating external information that informs the 
representations learned for the sentences in the document.", "We implement our approach through an attention mechanism of a neural network architecture for modeling documents.", "Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.", "Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.", "Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.", "For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information." ] }
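As a companion to the extractor equations quoted in the paper content above (Eqs. 1–2: attention weights over external cues, a dynamic context vector, and a single-layer scoring network), here is a minimal numpy sketch of one labeling step; the parameter shapes are assumptions for illustration:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def external_attention_step(h_t, E, U_o, V_h, W_h):
    """One extractor step: P(y_t | s_t, D, E), Eqs. 1-2 in the text above.

    h_t : (d,)   extractor RNN state after reading sentence s_t
    E   : (p, d) encodings of the p external cues e_1..e_p
    U_o : (2, d), V_h : (d, d), W_h : (d, d)  -- assumed shapes
    """
    alpha = softmax(E @ h_t)   # attention over external information
    h_bar = alpha @ E          # dynamic context vector: weighted sum of cues
    logits = U_o @ (V_h @ h_t + W_h @ h_bar)
    return softmax(logits)     # distribution over labels {0, 1}
```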
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Document Modeling For Sentence Extraction", "Sentence Extraction Applications", "Experiments and Results", "Extractive Document Summarization", "Answer Selection", "Conclusion" ] }
slide_id: GEM-SciDuet-train-124#paper-1342#slide-13
slide_title: Summary
slide_content_text:
- a robust neural model for sentence extraction that: reads whole document before extraction; attends to external information
- attending to external information (title and caption) helps creating better extractive summaries
- attending to question and word overlap metrics helps with question answer selection
- considering external information helps
target:
- a robust neural model for sentence extraction that: reads whole document before extraction; attends to external information
- attending to external information (title and caption) helps creating better extractive summaries
- attending to question and word overlap metrics helps with question answer selection
- considering external information helps
references: []
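The summarization experiments in the row above compute ROUGE with pyrouge using the flags "-a -c 95 -m -n 4 -w 1.2". A sketch of that evaluation call; the directory names and filename patterns are hypothetical, and a local ROUGE-1.5.5 installation is assumed:

```python
from pyrouge import Rouge155  # wrapper around the ROUGE-1.5.5 Perl script

r = Rouge155()  # assumes ROUGE_HOME is set; otherwise pass rouge_dir=...
r.system_dir = "system_summaries"            # hypothetical output directory
r.model_dir = "gold_highlights"              # hypothetical reference directory
r.system_filename_pattern = r"(\d+)_system.txt"
r.model_filename_pattern = "#ID#_gold.txt"

# Same options as quoted in the paper text; -e must point at ROUGE's data dir.
output = r.convert_and_evaluate(
    rouge_args="-e /path/to/ROUGE-1.5.5/data -a -c 95 -m -n 4 -w 1.2")
print(r.output_to_dict(output)["rouge_1_recall"])
```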
gem_id: GEM-SciDuet-train-125#paper-1343#slide-0
paper_id: 1343
paper_title: Sequence-to-sequence Models for Cache Transition Systems
paper_abstract: In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence to graph mapping problem to a word sequence to transition action sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models. [1]
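The paper content that follows defines a cache transition parser whose oracle, in the PushIndex phase, evicts the cache vertex whose nearest gold-edge neighbor in the buffer is furthest forward: i* = argmax_i min{ j : (v_i, β_j) ∈ E_G }. A minimal sketch of that rule; treating gold edges as undirected adjacency is an assumption for illustration:

```python
def choose_cache_position(cache, buffer, gold_edges):
    """Oracle choice for PushIndex: i* = argmax_i min{ j : (v_i, beta_j) in E_G }.

    cache      : list of cache vertices v_1..v_m ('$' placeholders included)
    buffer     : remaining input vertices beta_1..beta_k, in order
    gold_edges : set of (u, v) vertex pairs from the gold graph
    """
    def nearest_buffer_neighbor(v):
        js = [j for j, b in enumerate(buffer)
              if (v, b) in gold_edges or (b, v) in gold_edges]
        # A vertex with no remaining buffer neighbors is safest to evict,
        # so it gets the largest possible distance.
        return min(js) if js else len(buffer)

    return max(range(len(cache)),
               key=lambda i: nearest_buffer_neighbor(cache[i]))
```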
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "propose a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017) , which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing.", "Cache Transition Parser We adopt the transition system of , which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v 1 , .", ".", ".", ", v m ].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form: C = (σ, η, β, G p ) where σ, η and β are as described above, and G p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, .", ".", ".", ", $], [c 1 , .", ".", ".", ", c n 
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices constrained by the order of the input sentence.", "The final configuration is ([], [$, .", ".", ".", ", $] , [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w 1:n = w 1 , .", ".", ".", ", w n to a sequence of concepts c 1:n = c 1 , .", ".", ".", ", c n .", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "2 4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by .", "Let E G be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. 
The vertices are ordered according to their aligned position in the word sequence and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "ShiftOrPop phase: the oracle chooses transi - tion Pop, in case there is no edge (v m , v) in E G such that vertex v is in S, or chooses tran- sition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept and removes the vertex at this position and places its index, vertex pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success, otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β j to denote the j-th vertex in β.", "We choose a vertex v i * in η such that: In words, v i * is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move out of the cache vertex v i * and push it onto the stack, for later processing.", "i * = argmax i∈[m] min {j | (v i , β j ) ∈ E G } For each training example (x 1:n , g), the transition system generates the output AMR graph g from the input sequence x 1:n through an oracle sequence a 1:q ∈ Σ * a , where Σ a is the union of all possible actions.", "We model the probability of the output with the action sequence: P (a 1:q |x 1:n ) = q t=1 P (a t |a 1 , .", ".", ".", ", a t−1 , x 1:n ; θ) which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence Shown in Figure 3 , our sequence-to-sequence model takes a word sequence w 1:n and its mapped concept sequence c 1:n as the input, and the action sequence a 1:q as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w 1:n , we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ← − h w j and − → h w j are generated from the previous hidden states ← − h w j+1 and − → h w j−1 , and the representation vector x j of the current input word w j : ← − h w j = LSTM( ← − h w j+1 , x j ) − → h w j = LSTM( − → h w j−1 , x j ) The representation vector x j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w j : h w j = [ ← − h w j ; − → h w j ] Similarly, for the concept sequence, the final hidden state for concept c j is: h c j = [ ← − h c j ; − → h c j ] LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two 
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transi", "Soft vs Hard Attention for", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-0
Amr
After its competitor invented the front loading washing machine, the CEO of the American IM machine company believed that each of its employees had the ability for innovation, and formulated strategic countermeasures for innovation in the industry. [Slide figure: the AMR graph of this sentence, with concept nodes such as person, have-org-role-91, CEO, company, invent-01, innovate-01, capable-41, countermeasure, strategy, compete-01, employ-01, wash-01, load-01, machine, front, each, industry, and name nodes "IM" and "United States", connected by relations including ARG0, ARG1, ARG2, ARG0-of, ARG1-of, mod, purpose, prep-in, op1, and op2.]
After its competitor invented the front loading washing machine, the CEO of the American IM machine company believed that each of its employees had the ability for innovation, and formulated strategic countermeasures for innovation in the industry. [Slide figure: the AMR graph of this sentence, with concept nodes such as person, have-org-role-91, CEO, company, invent-01, innovate-01, capable-41, countermeasure, strategy, compete-01, employ-01, wash-01, load-01, machine, front, each, industry, and name nodes "IM" and "United States", connected by relations including ARG0, ARG1, ARG2, ARG0-of, ARG1-of, mod, purpose, prep-in, op1, and op2.]
[]
GEM-SciDuet-train-125#paper-1343#slide-1
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequence-to-sequence-based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence-to-graph mapping problem into a word-sequence-to-transition-action-sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task; our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence, obtained by running a deterministic oracle algorithm on the training (sentence, AMR graph) pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "Prior work proposes a special transition framework called a cache transition system to generate the set of semantic graphs.", "It adapts the stack-based parsing system by adding a working set, referred to as a cache, to the traditional stack and buffer.", "Follow-up work applies the cache transition system to AMR parsing and designs refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-action-sequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode whole-sentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use a BiLSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmas, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017), which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions is also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of the output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequence-to-sequence models for AMR parsing.", "Cache Transition Parser We adopt this cache transition system, which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v_1, ..., v_m].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form C = (σ, η, β, G_p), where σ, η, and β are as described above, and G_p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, ..., $], [c_1, ..., c_n], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices, constrained by the order of the input sentence.", "The final configuration is ([], [$, ..., $], [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w_{1:n} = w_1, ..., w_n to a sequence of concepts c_{1:n} = c_1, ..., c_n.", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from a separate classifier, which we introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this significantly reduces the target vocabulary size.", "The transitions of the parser are specified as follows.", "1. Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right and discarding the last element in the cache.", "2. Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3. PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v_i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "4. Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made, and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions, ARC and d-l.", "We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can view this phase as first making a binary decision about whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence "John wants to go" and the recognized concept sequence "Per want-01 go-01" (person name category Per for "John"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with a cache size of 3.",
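To make the mechanics concrete, here is a minimal Python sketch of the configuration and the Pop, PushIndex, and Arc transitions just described. It illustrates the transition semantics only; it is not the authors' implementation, and the graph encoding and the left/right direction convention are assumptions.

```python
# Minimal sketch of the cache transition configuration and transitions.
# Not the authors' code; graph encoding and direction convention are assumed.

def init_config(concepts, m):
    """Initial configuration ([], [$,...,$], [c_1,...,c_n], {})."""
    return {"stack": [], "cache": ["$"] * m,
            "buffer": list(concepts), "arcs": set()}

def pop(c):
    """Pop: return the stacked concept to its recorded cache slot,
    shifting the rest of the cache right and dropping the last element."""
    i, v = c["stack"].pop()
    c["cache"] = c["cache"][:i] + [v] + c["cache"][i:-1]

def push_index(c, i):
    """PushIndex(i): evict cache slot i onto the stack and move the
    next buffer concept into the last cache position."""
    c["stack"].append((i, c["cache"][i]))
    del c["cache"][i]
    c["cache"].append(c["buffer"].pop(0))

def arc(c, i, direction, label):
    """Arc(i, d, l) between the rightmost and the i-th cache concept;
    here 'L' means the rightmost concept is the head (an assumption)."""
    right, other = c["cache"][-1], c["cache"][i]
    head, dep = (right, other) if direction == "L" else (other, right)
    c["arcs"].add((head, label, dep))
```

In the run of Figure 2, each Shift is followed by one PushIndex and then m−1 arc decisions against the other cache slots, matching the phase structure above.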
"Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m; the correctness of the oracle has been established in prior work.", "Let E_G be the set of edges of the gold graph G.", "We maintain the set of vertices that have not yet been shifted into the cache as S, which is initialized with all vertices in G.", "The vertices are ordered according to their aligned positions in the word sequence, and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E_G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "1. ShiftOrPop phase: the oracle chooses transition Pop in case there is no edge (v_m, v) in E_G such that vertex v is in S, or chooses transition Shift and proceeds to the next phase.", "2. PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept, removes the vertex at this position, and places its (index, vertex) pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3. ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4. If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success; otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β_j to denote the j-th vertex in β.", "We choose a vertex v_{i*} in η such that: i* = argmax_{i ∈ [m]} min {j | (v_i, β_j) ∈ E_G}.", "In words, v_{i*} is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move the vertex v_{i*} out of the cache and push it onto the stack, for later processing.", "For each training example (x_{1:n}, g), the transition system generates the output AMR graph g from the input sequence x_{1:n} through an oracle action sequence a_{1:q} ∈ Σ_a^*, where Σ_a is the union of all possible actions.", "We model the probability of the output with the action sequence: P(a_{1:q} | x_{1:n}) = ∏_{t=1}^{q} P(a_t | a_1, ..., a_{t−1}, x_{1:n}; θ), which we estimate using a sequence-to-sequence model, as described in the next section.",
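The vertex-choice equation reads directly as code. The sketch below assumes E_G is given as a set of unordered concept pairs; a cache vertex with no remaining buffer neighbor gets min = +∞ and is therefore evicted first, since it will never need another arc.

```python
import math

def choose_cache_position(cache, buffer, gold_edges):
    """i* = argmax_{i in [m]} min { j : (v_i, beta_j) in E_G }."""
    def closest(v):
        js = [j for j, b in enumerate(buffer)
              if (v, b) in gold_edges or (b, v) in gold_edges]
        return min(js) if js else math.inf  # no buffer neighbor: evict first
    return max(range(len(cache)), key=lambda i: closest(cache[i]))
```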
"Soft vs. Hard Attention for Sequence-to-Action-Sequence As shown in Figure 3, our sequence-to-sequence model takes a word sequence w_{1:n} and its mapped concept sequence c_{1:n} as the input, and the action sequence a_{1:q} as the output.", "It uses two BiLSTM encoders, each encoding one input sequence.", "As the two encoders have the same structure, we only describe the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w_{1:n}, we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ←h^w_j and →h^w_j are generated from the previous hidden states ←h^w_{j+1} and →h^w_{j−1} and the representation vector x_j of the current input word w_j: ←h^w_j = LSTM(←h^w_{j+1}, x_j), →h^w_j = LSTM(→h^w_{j−1}, x_j).", "The representation vector x_j is the concatenation of the embeddings of the word, its lemma, and its POS tag.", "The hidden states of both directions are then concatenated as the final hidden state for word w_j: h^w_j = [←h^w_j; →h^w_j].", "Similarly, for the concept sequence, the final hidden state for concept c_j is: h^c_j = [←h^c_j; →h^c_j].", "LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two attention memories H^w and H^c, where H^w is the concatenation of the state vectors of all input words, and H^c is defined correspondingly for the input concepts: H^w = [h^w_1; h^w_2; ...; h^w_n] (1), H^c = [h^c_1; h^c_2; ...; h^c_n] (2).", "The decoder yields an action sequence a_1, a_2, ..., a_q as the output by calculating a sequence of hidden states s_1, s_2, ..., s_q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model, s_{t−1}; (2) the embedding of the previously generated action, e_{t−1}; and (3) the previous context vectors for words, µ^w_{t−1}, and concepts, µ^c_{t−1}, which are calculated using H^w and H^c, respectively.", "When t = 1, we initialize µ_0 as a zero vector, and set e_0 to the embedding of the start token <s>.", "The hidden state s_0 is initialized as: s_0 = W_d [←h^w_1; →h^w_n; ←h^c_1; →h^c_n] + b_d, where W_d and b_d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of the previous action e_{t−1} and the previous context vectors for words µ^w_{t−1} and concepts µ^c_{t−1} into the LSTM model to update its hidden state: s_t = LSTM(s_{t−1}, [e_{t−1}; µ^w_{t−1}; µ^c_{t−1}]) (3).", "The attention probabilities for the word sequence and the concept sequence are then calculated in the same way.", "Taking the word sequence as an example, the attention weight α^w_{t,i} on h^w_i ∈ H^w for time-step t is calculated as: ε_{t,i} = v_c^T tanh(W_h h^w_i + W_s s_t + b_c), α^w_{t,i} = exp(ε_{t,i}) / Σ_{j=1}^{n} exp(ε_{t,j}), where W_h, W_s, v_c, and b_c are model parameters.", "The new context vector is µ^w_t = Σ_{i=1}^{n} α^w_{t,i} h^w_i.", "The calculation of µ^c_t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: P_{Σ_a} = softmax(V_a [s_t; µ^w_t; µ^c_t] + b_a) (4), where V_a and b_a are learnable parameters, the number of rows in V_a equals the number of actions, and Σ_a is the set of all actions.", "Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l_w and l_c, to model monotonic attention to the word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence, in contrast to Equation 3: s_t = LSTM(s_{t−1}, [e_{t−1}; h^w_{l_w}; h^c_{l_c}]) (5).", "Control Mechanism.", "Both pointers are initialized to 0 and advanced to the next position deterministically.", "We move the concept attention focus l_c to the next position after arc decisions to all the other m−1 cache concepts have been made.", "We move the word attention focus l_w to the aligned position in case the new concept is aligned; otherwise we do not move the word focus.", "As shown in Figure 4, after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept, go-01.", "As this concept is aligned, we move the word focus to its aligned position, go, in the word sequence, skipping the unaligned word to.",
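The following numpy sketch shows one soft-attention step (the score, normalization, and context computation above) together with the deterministic pointer control of the hard-attention variant; parameter shapes and the alignment dictionary are illustrative assumptions, not the authors' code.

```python
import numpy as np

def soft_attention(H, s_t, W_h, W_s, v_c, b_c):
    """alpha_{t,i} = softmax_i(v_c^T tanh(W_h h_i + W_s s_t + b_c));
    returns the context vector mu_t = sum_i alpha_{t,i} h_i.
    H: (n, d) attention memory; s_t: (d_s,) decoder state."""
    scores = np.tanh(H @ W_h.T + s_t @ W_s.T + b_c) @ v_c  # shape (n,)
    alpha = np.exp(scores - scores.max())                  # stable softmax
    alpha /= alpha.sum()
    return alpha @ H                                       # mu_t, shape (d,)

def advance_pointers(l_w, l_c, concept_to_word):
    """Monotonic control: called once all m-1 arc decisions for the current
    concept are made; l_w moves only if the next concept is aligned."""
    l_c += 1
    if l_c in concept_to_word:   # concept_to_word: alignment map (assumed)
        l_w = concept_to_word[l_c]
    return l_w, l_c
```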
"Transition State Features for the Decoder Another difference of our model from Buys and Blunsom (2017) is that we extract features from the current transition state configuration C_t: e_f(C_t) = [e_{f_1}(C_t); e_{f_2}(C_t); ...; e_{f_l}(C_t)], where l is the number of features extracted from C_t, and e_{f_k}(C_t) (k = 1, ..., l) is the embedding for the k-th feature, which is learned during training.", "These feature embeddings are concatenated as e_f(C_t) and fed as additional input to the decoder.", "For the soft attention decoder: s_t = LSTM(s_{t−1}, [e_{t−1}; µ^w_{t−1}; µ^c_{t−1}; e_f(C_t)]), and for the hard attention decoder: s_t = LSTM(s_{t−1}, [e_{t−1}; h^w_{l_w}; h^c_{l_c}; e_f(C_t)]).", "We use the following features in our experiments.", "1. Phase type: indicator features showing which phase the next transition is in.", "2. ShiftOrPop features: token features for the rightmost cache concept and the leftmost buffer concept; the number of dependencies to words on the right, and the top three dependency labels among them.", "3. ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to; the word, concept, and dependency distances between the two concepts; the labels of the two most recent outgoing arcs of these two concepts, their first incoming arcs, and the number of incoming arcs; and the dependency label between the two positions if there is a dependency arc between them.", "4. PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic given the last output action.", "For example, if the last output action is Shift, the current phase type is PushIndex.", "We only extract the corresponding features for this phase and fill all the other feature types with -NULL- as placeholders.", "The features for the other phases are handled similarly.", "AMR Parsing Training and Decoding We train our models using the cross-entropy loss over each oracle action sequence a*_1, ..., a*_q: L = −Σ_{t=1}^{q} log P(a*_t | a*_1, ..., a*_{t−1}, X; θ) (6), where X represents the input word and concept sequences, and θ denotes the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected for evaluation on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both encoder and decoder are set to 100.", "The word embeddings are initialized from GloVe embeddings (Pennington et al., 2014) pretrained on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with sizes of 20 and 50, respectively.",
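A hedged PyTorch sketch of the two pieces just described: the concatenated transition-state feature embedding e_f(C_t) and the per-sequence cross-entropy objective of Equation 6. Module names and sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class TransitionStateFeatures(torch.nn.Module):
    """e_f(C_t): concatenation of l learned feature embeddings (dim 50 each,
    matching the feature embedding size used in the paper)."""
    def __init__(self, feature_vocab_sizes, dim=50):
        super().__init__()
        self.tables = torch.nn.ModuleList(
            torch.nn.Embedding(v, dim) for v in feature_vocab_sizes)

    def forward(self, feature_ids):  # feature_ids: LongTensor of shape (l,)
        return torch.cat([table(idx) for table, idx
                          in zip(self.tables, feature_ids)], dim=-1)

def oracle_loss(action_logits, oracle_actions):
    """L = -sum_t log P(a*_t | a*_<t, X); logits: (q, |Sigma_a|), targets: (q,)."""
    return F.cross_entropy(action_logits, oracle_actions, reduction="sum")
```

In training, the feature vector would be concatenated with the action embedding and context (or hard-attention) vectors before the LSTM update, and oracle_loss(...).backward() would drive the Adam step.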
"Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment.", "We define special categories such as named entities (NE), dates (DATE), single-rooted subgraphs involving multiple concepts (MULT), numbers (NUMBER), and phrases (PHRASE).", "The phrases are extracted based on the multiple-to-one alignment in the training data; one example phrase is more than, which aligns to the single concept more-than.", "We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014), which greedily aligns a span of words to an AMR subgraph using a set of heuristics.", "This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side.", "We use the semi-Markov model from Flanigan et al. (2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph.", "During decoding, the output contains categories, and we need to map each category back to the corresponding AMR concept or subgraph.", "We save a table Q that records the original subgraph each category was collapsed from, and map each category to its original subgraph representation.", "We also use heuristic rules to generate the target-side AMR subgraph representation for NE, DATE, and NUMBER based on the source-side tokens.",
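The postprocessing lookup can be sketched in a few lines; the table Q and the subgraph strings below are illustrative assumptions.

```python
def restore_categories(output_tokens, Q):
    """Map each collapsed category token (NE, DATE, MULT, NUMBER, PHRASE)
    back to the subgraph it was collapsed from; other tokens pass through."""
    return [Q.get(tok, tok) for tok in output_tokens]

# e.g. restore_categories(["Per", "want-01", "go-01"],
#                         {"Per": '(p / person :name (n / name :op1 "John"))'})
```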
the update.", "Table 2 shows the impact of different components for the sequence-to-sequence model.", "We can see that the transition state features play a very important role for predicting the correct transition action.", "This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction.", "Relying on the sequence-to-sequence model alone does not perform well in disambiguating these choices, while the transition state can enforce direct constraints.", "We can also see that while the hard attention only attends to one position of the input, it performs slightly better than the soft attention model, while the time complexity is lower.", "Impact of Different Components Impact of Different Cache Sizes The cache size of the transition system can be optimized as a trade-off between coverage of AMR graphs and the prediction accuracy.", "While larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make.", "From Table 3 we can see that Comparison with other Parsers Table 4 shows the comparison with other AMR parsers.", "The first three systems are some competitive neural models.", "We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017) .", "Konstas et al.", "(2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use selftraining on 20M unlabeled Gigaword sentences.", "Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction.", "Our model also We also show the performance of some of the best-performing models.", "While our hard attention achieves slightly lower performance in comparison with Wang et al.", "(2015a) and , it is worth noting that their approaches of using WordNet, semantic role labels and word cluster features are complimentary to ours.", "The alignment from the aligner and the concept identification identifier also play an important role for improving the performance.", "propose to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.", "Dealing with Reentrancy Reentrancy is an important characteristic of AMR, and we evaluate the Smatch score only on the reentrant edges following Damonte et al.", "(2017) .", "From Table 5 we can see that our hard attention model significantly outperforms the feedforward model of in predicting reentrancies.", "This is because predicting reentrancy is directly related to the Ar-cBinary phase of the cache transition system since it decides to make multiple arc decisions to the same vertex, and we can see from Table 1 that the hard attention model has significantly better prediction accuracy in this phase.", "We also compare the reentrancy results of our transition system with two other systems, Damonte et al.", "(2017) and JAMR, where these statistics are available.", "From Table 5 , we can see that our cache transition system slightly outperforms these two systems in predicting reentrancies.", "Figure 5 shows a reentrancy example where JAMR and the feedforward network of do not predict well, while our system predicts the correct output.", "JAMR fails to predict the reentrancy arc from desire-01 to i, and connects the wrong arc from \"live-01\" to \"-\" instead of 
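For reference, reentrant nodes are exactly those with more than one incoming arc, which is what repeated ArcBinary decisions to the same vertex create. A small counting helper, with the graph format assumed as (head, label, dependent) triples:

```python
from collections import Counter

def reentrant_nodes(arcs):
    """Return the vertices with in-degree > 1, i.e. the reentrancies."""
    indegree = Counter(dep for _, _, dep in arcs)
    return {v for v, d in indegree.items() if d > 1}
```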
from \"desire-01\".", "The feedforward model of and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5 .", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available.", "While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transi", "Soft vs Hard Attention for", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-1
Transition based AMR parsing
There has been previous work (Sagae and Tsujii; Damonte et al.; Zhou et al.; Ribeyre et al.; Wang et al.) on transition-based graph parsing. Our work introduces a new data structure, the cache, for generating graphs of a certain treewidth.
There has been previous work (Sagae and Tsujii; Damonte et al.; Zhou et al.; Ribeyre et al.; Wang et al.) on transition-based graph parsing. Our work introduces a new data structure, the cache, for generating graphs of a certain treewidth.
[]
GEM-SciDuet-train-125#paper-1343#slide-2
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequenceto-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence to graph mapping problem to a word sequence to transition action sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "propose a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017) , which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing.", "Cache Transition Parser We adopt the transition system of , which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v 1 , .", ".", ".", ", v m ].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form: C = (σ, η, β, G p ) where σ, η and β are as described above, and G p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, .", ".", ".", ", $], [c 1 , .", ".", ".", ", c n 
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices constrained by the order of the input sentence.", "The final configuration is ([], [$, .", ".", ".", ", $] , [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w 1:n = w 1 , .", ".", ".", ", w n to a sequence of concepts c 1:n = c 1 , .", ".", ".", ", c n .", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "2 4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by .", "Let E G be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. 
The vertices are ordered according to their aligned position in the word sequence and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "ShiftOrPop phase: the oracle chooses transi - tion Pop, in case there is no edge (v m , v) in E G such that vertex v is in S, or chooses tran- sition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept and removes the vertex at this position and places its index, vertex pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success, otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β j to denote the j-th vertex in β.", "We choose a vertex v i * in η such that: In words, v i * is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move out of the cache vertex v i * and push it onto the stack, for later processing.", "i * = argmax i∈[m] min {j | (v i , β j ) ∈ E G } For each training example (x 1:n , g), the transition system generates the output AMR graph g from the input sequence x 1:n through an oracle sequence a 1:q ∈ Σ * a , where Σ a is the union of all possible actions.", "We model the probability of the output with the action sequence: P (a 1:q |x 1:n ) = q t=1 P (a t |a 1 , .", ".", ".", ", a t−1 , x 1:n ; θ) which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence Shown in Figure 3 , our sequence-to-sequence model takes a word sequence w 1:n and its mapped concept sequence c 1:n as the input, and the action sequence a 1:q as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w 1:n , we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ← − h w j and − → h w j are generated from the previous hidden states ← − h w j+1 and − → h w j−1 , and the representation vector x j of the current input word w j : ← − h w j = LSTM( ← − h w j+1 , x j ) − → h w j = LSTM( − → h w j−1 , x j ) The representation vector x j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w j : h w j = [ ← − h w j ; − → h w j ] Similarly, for the concept sequence, the final hidden state for concept c j is: h c j = [ ← − h c j ; − → h c j ] LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two 
attention memories H w and H c , where H w is the concatenation of the state vectors of all input words, and H c for input concepts correspondingly: H w = [h w 1 ; h w 2 ; .", ".", ".", "; h w n ] (1) H c = [h c 1 ; h c 2 ; .", ".", ".", "; h c n ] (2) The decoder yields an action sequence a 1 , a 2 , .", ".", ".", ", a q as the output by calculating a sequence of hidden states s 1 , s 2 .", ".", ".", ", s q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model s t−1 ; (2) the embedding of the previous generated action e t−1 ; and (3) the previous context vectors for words µ w t−1 and concepts µ c t−1 , which are calculated using H w and H c , respectively.", "When t = 1, we initialize µ 0 as a zero vector, and set e 0 to the embedding of the start token \" s \".", "The hidden state s 0 is initialized as: s 0 = W d [ ← − h w 1 ; − → h w n ; ← − h c 1 ; − → h c n ] + b d , where W d and b d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of previous action e t−1 and the previous context vectors for words µ w t−1 and concepts µ c t−1 into the LSTM model to update its hidden state.", "s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ]) (3) Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Take the word sequence as an example, α w t,i on h w i ∈ H w for time-step t is calculated as: t,i = v T c tanh(W h h w i + W s s t + b c ) α w t,i = exp( t,i ) N j=1 exp( t,j ) W h , W s , v c and b c are model parameters.", "The new context vector µ w t = n i=1 α w t,i h w i .", "The calculation of µ c t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: (4) where V a and b a are learnable parameters, and the number of rows in V a represents the number of all actions.", "The symbol Σ a is the set of all actions.", "P Σa = softmax(V a [s t ; µ w t ; µ c t ] + b a ), Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l w and l c , to model monotonic attention to word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence in contrast to Equation 3: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ]) (5) Control Mechanism.", "Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus l c to the next position after arc decisions to all the other m − 1 cache concepts are made.", "We move the word attention focus l w to its aligned position in case the new concept is aligned, otherwise we don't move the word focus.", "As shown in Figure 4 , after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for 
"Transition State Features for Decoder Another difference of our model from Buys and Blunsom (2017) is that we extract features from the current transition state configuration $C_t$: $e_f(C_t) = [e_{f_1}(C_t); e_{f_2}(C_t); \cdots; e_{f_l}(C_t)]$, where $l$ is the number of features extracted from $C_t$ and $e_{f_k}(C_t)$ ($k = 1, \ldots, l$) represents the embedding for the $k$-th feature, which is learned during training.", "These feature embeddings are concatenated as $e_f(C_t)$ and fed as additional input to the decoder (a sketch of this assembly appears after this passage).", "For the soft attention decoder: $s_t = \mathrm{LSTM}(s_{t-1}, [e_{t-1}; \mu^w_{t-1}; \mu^c_{t-1}; e_f(C_t)])$, and for the hard attention decoder: $s_t = \mathrm{LSTM}(s_{t-1}, [e_{t-1}; h^w_{l_w}; h^c_{l_c}; e_f(C_t)])$.", "We use the following features in our experiments: 1. Phase type: indicator features showing which phase the next transition is.", "2. ShiftOrPop features: token features for the rightmost cache concept and the leftmost buffer concept; the number of dependencies to words on the right, and the top three dependency labels for them.", "3. ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to; the word, concept and dependency distance between the two concepts; the labels for the two most recent outgoing arcs for these two concepts, their first incoming arc, and the number of incoming arcs; and the dependency label between the two positions if there is a dependency arc between them.", "4. PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic from the last action output.", "For example, if the last action output is Shift, the current phase type would be PushIndex.", "We only extract corresponding features for this phase and fill all the other feature types with -NULL- as placeholders.", "The features for other phases are similar.", "AMR Parsing Training and Decoding We train our models using the cross-entropy loss over each oracle action sequence $a^*_1, \ldots, a^*_q$: $L = -\sum_{t=1}^{q} \log P(a^*_t \mid a^*_1, \ldots, a^*_{t-1}, X; \theta)$ (6), where $X$ represents the input word and concept sequences, and $\theta$ denotes the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected to evaluate on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both encoder and decoder are set to 100.", "The word embeddings are initialized from GloVe pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with sizes of 20 and 50, respectively.",
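A hedged sketch of how the feature vector e_f(C_t) could be assembled; the table and extractor names are assumptions rather than the authors' code, and the -NULL- padding mirrors the description above:

    import numpy as np

    def state_feature_vector(config, tables, extractors, null_id=0):
        # One embedding table (2-D array) per feature; each extractor maps the
        # configuration C_t to a feature id, or None when that feature type is
        # inactive in the current phase, in which case the -NULL- row is used.
        parts = []
        for table, extract in zip(tables, extractors):
            fid = extract(config)
            parts.append(table[null_id if fid is None else fid])
        return np.concatenate(parts)  # e_f(C_t), appended to the decoder input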
"Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment.", "We define some special categories such as named entities (NE), dates (DATE), single-rooted subgraphs involving multiple concepts (MULT), numbers (NUMBER) and phrases (PHRASE).", "The phrases are extracted based on the multiple-to-one alignment in the training data.", "One example phrase is more than, which aligns to a single concept more-than.", "We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014), which greedily aligns a span of words to AMR subgraphs using a set of heuristics.", "This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side.", "We use the semi-Markov model from Flanigan et al. (2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph.", "During decoding, our output has categories, and we need to map each category to the corresponding AMR concept or subgraph.", "We save a table Q which shows the original subgraph each category is collapsed from, and map each category to its original subgraph representation.", "We also use heuristic rules to generate the target-side AMR subgraph representation for NE, DATE, and NUMBER based on the source-side tokens.",
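The postprocessing step can be pictured with a small Python sketch; only the table name Q comes from the text, while restore_categories and expand_entity are hypothetical helpers:

    def restore_categories(tokens, Q, expand_entity):
        # Q maps a category token back to the subgraph it was collapsed from;
        # expand_entity applies the heuristic rules that build NE, DATE, and
        # NUMBER subgraphs from the source-side tokens.
        restored = []
        for tok in tokens:
            if tok in Q:
                restored.append(Q[tok])
            elif tok.startswith(('NE', 'DATE', 'NUMBER')):
                restored.append(expand_entity(tok))
            else:
                restored.append(tok)
        return restored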
"Experiments We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 task 8 on meaning representation parsing (May, 2016).", "The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences, which mainly cover domains like newswire, discussion forum, etc.", "All parsing results are measured by Smatch (version 2.0.2).", "Experiment Settings We categorize the training data using the automatic alignment and dump a template for date entities and frequent phrases from the multiple-to-one alignment.", "We also generate an alignment table from tokens or phrases to their candidate target-side subgraphs.", "For the dev and test data, we first extract the named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009) and extract date entities by matching spans with the date template.", "We further categorize the dataset with the categories we have defined.", "After categorization, we use Stanford CoreNLP to get the POS tags and dependencies of the categorized dataset.", "We run the oracle algorithm separately for training and dev data (with alignment) to get the statistics of individual phases.", "We use a cache size of 5 in our experiments.", "Results Individual Phase Accuracy We first evaluate the prediction accuracy of individual phases on the dev oracle data assuming gold prediction history.", "The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions respectively.", "Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.", "use a separate feedforward network to predict each phase independently.", "We use the same alignment from the SemEval dataset as in to avoid differences resulting from the aligner.", "Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats uses hard attention.", "We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset.", "The sequence-to-sequence models perform better than the feedforward model of on ShiftOrPop and ArcBinary, which shows that whole-sentence context information is important for the prediction of these two phases.", "On the other hand, the sequence-to-sequence models perform worse than the feedforward models on PushIndex and ArcLabel.", "One possible reason is that the model tries to optimize the overall accuracy, while these two phases account for fewer than 25% of the total transition actions and might be less attended to during the update.", "Impact of Different Components Table 2 shows the impact of different components for the sequence-to-sequence model.", "We can see that the transition state features play a very important role in predicting the correct transition action.", "This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction.", "Relying on the sequence-to-sequence model alone does not perform well in disambiguating these choices, while the transition state can enforce direct constraints.", "We can also see that while the hard attention only attends to one position of the input, it performs slightly better than the soft attention model, while the time complexity is lower.", "Impact of Different Cache Sizes The cache size of the transition system can be optimized as a trade-off between coverage of AMR graphs and prediction accuracy.", "While a larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make.", "From Table 3 we can see that", "Comparison with other Parsers Table 4 shows the comparison with other AMR parsers.", "The first three systems are some competitive neural models.", "We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017).", "Konstas et al. (2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use self-training on 20M unlabeled Gigaword sentences.", "Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction.", "Our model also", "We also show the performance of some of the best-performing models.", "While our hard attention achieves slightly lower performance in comparison with Wang et al. (2015a) and , it is worth noting that their approaches of using WordNet, semantic role labels and word cluster features are complementary to ours.", "The alignment from the aligner and the concept identifier also play an important role in improving the performance.", "propose to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.", "Dealing with Reentrancy Reentrancy is an important characteristic of AMR, and we evaluate the Smatch score only on the reentrant edges following Damonte et al. (2017).", "From Table 5 we can see that our hard attention model significantly outperforms the feedforward model of in predicting reentrancies.", "This is because predicting reentrancy is directly related to the ArcBinary phase of the cache transition system, since it decides to make multiple arc decisions to the same vertex, and we can see from Table 1 that the hard attention model has significantly better prediction accuracy in this phase.", "We also compare the reentrancy results of our transition system with two other systems, Damonte et al. (2017) and JAMR, where these statistics are available.", "From Table 5, we can see that our cache transition system slightly outperforms these two systems in predicting reentrancies.", "Figure 5 shows a reentrancy example where JAMR and the feedforward network of do not predict well, while our system predicts the correct output.", "JAMR fails to predict the reentrancy arc from desire-01 to i, and connects the wrong arc from \"live-01\" to \"-\" instead of from \"desire-01\".",
from \"desire-01\".", "The feedforward model of and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5 .", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available.", "While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transi", "Soft vs Hard Attention for", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-2
Introduction to treewidth
A tree has treewidth 1, an example graph treewidth 2, and a complete graph of N nodes treewidth N-1. [Figure captions, from Gildea, Satta, and Peng: Figure 4, the graph for the semantic representation of the sentence "John wants Mary to succeed", where vertex w represents the word token wants, j represents John, s represents succeed, and m represents Mary; Figure 2, (a) an optimal tree decomposition of graph G, a set of overlapping clusters of G's vertices arranged in a tree, and (b) the high-level treelike structure of G; example AMR graphs contrasting small treewidth with large treewidth.]
A tree has treewidth 1, an example graph treewidth 2, and a complete graph of N nodes treewidth N-1. [Figure captions, from Gildea, Satta, and Peng: Figure 4, the graph for the semantic representation of the sentence "John wants Mary to succeed", where vertex w represents the word token wants, j represents John, s represents succeed, and m represents Mary; Figure 2, (a) an optimal tree decomposition of graph G, a set of overlapping clusters of G's vertices arranged in a tree, and (b) the high-level treelike structure of G; example AMR graphs contrasting small treewidth with large treewidth.]
[]
GEM-SciDuet-train-125#paper-1343#slide-3
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence-to-graph mapping problem into a word sequence to transition action sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "propose a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017) , which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing.", "Cache Transition Parser We adopt the transition system of , which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v 1 , .", ".", ".", ", v m ].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form: C = (σ, η, β, G p ) where σ, η and β are as described above, and G p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, .", ".", ".", ", $], [c 1 , .", ".", ".", ", c n 
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices constrained by the order of the input sentence.", "The final configuration is ([], [$, .", ".", ".", ", $] , [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w 1:n = w 1 , .", ".", ".", ", w n to a sequence of concepts c 1:n = c 1 , .", ".", ".", ", c n .", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "2 4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by .", "Let E G be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. 
The vertices are ordered according to their aligned position in the word sequence and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "ShiftOrPop phase: the oracle chooses transi - tion Pop, in case there is no edge (v m , v) in E G such that vertex v is in S, or chooses tran- sition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept and removes the vertex at this position and places its index, vertex pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success, otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β j to denote the j-th vertex in β.", "We choose a vertex v i * in η such that: In words, v i * is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move out of the cache vertex v i * and push it onto the stack, for later processing.", "i * = argmax i∈[m] min {j | (v i , β j ) ∈ E G } For each training example (x 1:n , g), the transition system generates the output AMR graph g from the input sequence x 1:n through an oracle sequence a 1:q ∈ Σ * a , where Σ a is the union of all possible actions.", "We model the probability of the output with the action sequence: P (a 1:q |x 1:n ) = q t=1 P (a t |a 1 , .", ".", ".", ", a t−1 , x 1:n ; θ) which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence Shown in Figure 3 , our sequence-to-sequence model takes a word sequence w 1:n and its mapped concept sequence c 1:n as the input, and the action sequence a 1:q as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w 1:n , we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ← − h w j and − → h w j are generated from the previous hidden states ← − h w j+1 and − → h w j−1 , and the representation vector x j of the current input word w j : ← − h w j = LSTM( ← − h w j+1 , x j ) − → h w j = LSTM( − → h w j−1 , x j ) The representation vector x j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w j : h w j = [ ← − h w j ; − → h w j ] Similarly, for the concept sequence, the final hidden state for concept c j is: h c j = [ ← − h c j ; − → h c j ] LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two 
attention memories H w and H c , where H w is the concatenation of the state vectors of all input words, and H c for input concepts correspondingly: H w = [h w 1 ; h w 2 ; .", ".", ".", "; h w n ] (1) H c = [h c 1 ; h c 2 ; .", ".", ".", "; h c n ] (2) The decoder yields an action sequence a 1 , a 2 , .", ".", ".", ", a q as the output by calculating a sequence of hidden states s 1 , s 2 .", ".", ".", ", s q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model s t−1 ; (2) the embedding of the previous generated action e t−1 ; and (3) the previous context vectors for words µ w t−1 and concepts µ c t−1 , which are calculated using H w and H c , respectively.", "When t = 1, we initialize µ 0 as a zero vector, and set e 0 to the embedding of the start token \" s \".", "The hidden state s 0 is initialized as: s 0 = W d [ ← − h w 1 ; − → h w n ; ← − h c 1 ; − → h c n ] + b d , where W d and b d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of previous action e t−1 and the previous context vectors for words µ w t−1 and concepts µ c t−1 into the LSTM model to update its hidden state.", "s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ]) (3) Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Take the word sequence as an example, α w t,i on h w i ∈ H w for time-step t is calculated as: t,i = v T c tanh(W h h w i + W s s t + b c ) α w t,i = exp( t,i ) N j=1 exp( t,j ) W h , W s , v c and b c are model parameters.", "The new context vector µ w t = n i=1 α w t,i h w i .", "The calculation of µ c t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: (4) where V a and b a are learnable parameters, and the number of rows in V a represents the number of all actions.", "The symbol Σ a is the set of all actions.", "P Σa = softmax(V a [s t ; µ w t ; µ c t ] + b a ), Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l w and l c , to model monotonic attention to word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence in contrast to Equation 3: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ]) (5) Control Mechanism.", "Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus l c to the next position after arc decisions to all the other m − 1 cache concepts are made.", "We move the word attention focus l w to its aligned position in case the new concept is aligned, otherwise we don't move the word focus.", "As shown in Figure 4 , after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for 
Decoder Another difference of our model with Buys and Blunsom (2017) is that we extract features from the current transition state configuration C t : e f (C t ) = [e f 1 (C t ); e f 2 (C t ); · · · ; e f l (C t )] where l is the number of features extracted from C t and e f k (C t ) (k = 1, .", ".", ".", ", l) represents the embedding for the k-th feature, which is learned during training.", "These feature embeddings are concatenated as e f (C t ), and fed as additional input to the decoder.", "For the soft attention decoder: s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ; e f (C t )]) and for the hard attention decoder: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ; e f (C t )]) We use the following features in our experiments: 1.", "Phase type: indicator features showing which phase the next transition is.", "2.", "ShiftOrPop features: token features 3 for the rightmost cache concept and the leftmost buffer concept.", "Number of dependencies to words on the right, and the top three dependency labels for them.", "3.", "ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to.", "Word, concept and dependency distance between the two concepts.", "The labels for the two most recent outgoing arcs for these two concepts and their first incoming arc and the number of incoming arcs.", "Dependency label between the two positions if there is a dependency arc between them.", "4.", "PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic from the last action output.", "For example, if the last action output is Shift, the current phase type would be PushIndex.", "We only extract corresponding features for this phase and fill all the other feature types with -NULLas placeholders.", "The features for other phases are similar.", "AMR Parsing Training and Decoding We train our models using the cross-entropy loss, over each oracle action sequence a * 1 , .", ".", ".", ", a * q : L = − q t=1 log P (a * t |a * 1 , .", ".", ".", ", a * t−1 , X; θ), (6) where X represents the input word and concept sequences, and θ is the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected to evaluate on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both encoder and decoder are set to 100.", "The word embeddings are initialized from Glove pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with the sizes of 20 and 50, respectively.", "Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment.", "We define some special categories such as named entities (NE), dates (DATE), single rooted subgraphs involving multiple concepts (MULT) 4 , numbers (NUMBER) and phrases (PHRASE).", "The phrases are extracted based on the multiple-to-one alignment in the training data.", "One example phrase is more than which aligns to a single concept more-than.", "We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014) , which greedily 
aligns a span of words to AMR subgraphs using a set of heuristics.", "This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side.", "We use the semi-Markov model from Flanigan et al.", "(2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph.", "During decoding, our output has categories, and we need to map each category to the corresponding AMR concept or subgraph.", "We save a table Q which shows the original subgraph each category is collapsed from, and map each category to its original subgraph representation.", "We also use heuristic rules to generate the target-side AMR subgraph representation for NE, DATE, and NUMBER based on the source side tokens.", "Experiments We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 task 8 on meaning representation parsing (May, 2016) .", "The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences which mainly cover domains like newswire, discussion forum, etc.", "All parsing results are measured by Smatch (version 2.0.2) .", "Experiment Settings We categorize the training data using the automatic alignment and dump a template for date entities and frequent phrases from the multiple to one alignment.", "We also generate an alignment table from tokens or phrases to their candidate targetside subgraphs.", "For the dev and test data, we first extract the named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009 ) and extract date entities by matching spans with the date template.", "We further categorize the dataset with the categories we have defined.", "After categorization, we use Stanford CoreNLP to get the POS tags and dependencies of the categorized dataset.", "We run the oracle algorithm separately for training and dev data (with alignment) to get the statistics of individual phases.", "We use a cache size of 5 in our experiments.", "Results Individual Phase Accuracy We first evaluate the prediction accuracy of individual phases on the dev oracle data assuming gold prediction history.", "The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions respectively.", "Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.", "use a separate feedforward network to predict each phase independently.", "We use the same alignment from the SemEval dataset as in to avoid differences resulting from the aligner.", "Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats is using hard attention.", "We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset.", "The sequence-to-sequence models perform better than the feedforward model of on ShiftOrPop and ArcBinary, which shows that the whole-sentence context information is important for the prediction of these two phases.", "On the other hand, the sequence-tosequence models perform worse than the feedforward models on PushIndex and ArcLabel.", "One possible reason is that the model tries to optimize the overall accuracy, while these two phases account for fewer than 25% of the total transition actions and might be less attended to during 
the update.", "Table 2 shows the impact of different components for the sequence-to-sequence model.", "We can see that the transition state features play a very important role for predicting the correct transition action.", "This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction.", "Relying on the sequence-to-sequence model alone does not perform well in disambiguating these choices, while the transition state can enforce direct constraints.", "We can also see that while the hard attention only attends to one position of the input, it performs slightly better than the soft attention model, while the time complexity is lower.", "Impact of Different Components Impact of Different Cache Sizes The cache size of the transition system can be optimized as a trade-off between coverage of AMR graphs and the prediction accuracy.", "While larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make.", "From Table 3 we can see that Comparison with other Parsers Table 4 shows the comparison with other AMR parsers.", "The first three systems are some competitive neural models.", "We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017) .", "Konstas et al.", "(2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use selftraining on 20M unlabeled Gigaword sentences.", "Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction.", "Our model also We also show the performance of some of the best-performing models.", "While our hard attention achieves slightly lower performance in comparison with Wang et al.", "(2015a) and , it is worth noting that their approaches of using WordNet, semantic role labels and word cluster features are complimentary to ours.", "The alignment from the aligner and the concept identification identifier also play an important role for improving the performance.", "propose to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.", "Dealing with Reentrancy Reentrancy is an important characteristic of AMR, and we evaluate the Smatch score only on the reentrant edges following Damonte et al.", "(2017) .", "From Table 5 we can see that our hard attention model significantly outperforms the feedforward model of in predicting reentrancies.", "This is because predicting reentrancy is directly related to the Ar-cBinary phase of the cache transition system since it decides to make multiple arc decisions to the same vertex, and we can see from Table 1 that the hard attention model has significantly better prediction accuracy in this phase.", "We also compare the reentrancy results of our transition system with two other systems, Damonte et al.", "(2017) and JAMR, where these statistics are available.", "From Table 5 , we can see that our cache transition system slightly outperforms these two systems in predicting reentrancies.", "Figure 5 shows a reentrancy example where JAMR and the feedforward network of do not predict well, while our system predicts the correct output.", "JAMR fails to predict the reentrancy arc from desire-01 to i, and connects the wrong arc from \"live-01\" to \"-\" instead of 
from \"desire-01\".", "The feedforward model of and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5 .", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available.", "While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transi", "Soft vs Hard Attention for", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-3
Tree decomposition
[Figure, from Gildea, Satta, and Peng: panels (a) and (b) showing a graph over vertices A through S and its tree decomposition into overlapping three-vertex clusters (e.g. ALB, LBR, BRD, RDM, DMF, MFO, FOG, KAL, DMP, OGE, IKA, MPH, GEJ, HCS, CSQ, SQN).]
[Figure, from Gildea, Satta, and Peng: panels (a) and (b) showing a graph over vertices A through S and its tree decomposition into overlapping three-vertex clusters (e.g. ALB, LBR, BRD, RDM, DMF, MFO, FOG, KAL, DMP, OGE, IKA, MPH, GEJ, HCS, CSQ, SQN).]
[]
GEM-SciDuet-train-125#paper-1343#slide-4
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence-to-graph mapping problem into a word sequence to transition action sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "propose a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017) , which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing.", "Cache Transition Parser We adopt the transition system of , which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v 1 , .", ".", ".", ", v m ].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form: C = (σ, η, β, G p ) where σ, η and β are as described above, and G p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, .", ".", ".", ", $], [c 1 , .", ".", ".", ", c n 
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices constrained by the order of the input sentence.", "The final configuration is ([], [$, .", ".", ".", ", $] , [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w 1:n = w 1 , .", ".", ".", ", w n to a sequence of concepts c 1:n = c 1 , .", ".", ".", ", c n .", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "2 4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by .", "Let E G be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. 
"Oracle Extraction Algorithm: We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size $m$; the correctness of the oracle is shown by Gildea et al. (2018).", "Let $E_G$ be the set of edges of the gold graph $G$.", "We maintain the set of vertices that have not yet been shifted into the cache as $S$, which is initialized with all vertices in $G$.", "The vertices are ordered according to their aligned position in the word sequence, and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into $E_G$ to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "1. ShiftOrPop phase: the oracle chooses transition Pop in case there is no edge $(v_m, v)$ in $E_G$ such that vertex $v$ is in $S$, or otherwise chooses transition Shift and proceeds to the next phase.", "2. PushIndex phase: in this phase, the oracle first chooses a position $i$ (as explained below) in the cache to place the candidate concept, removes the vertex at this position, and places its (index, vertex) pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3. ArcBinary, ArcLabel phases: between the rightmost cache concept and each other concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to $m-1$ cache concepts are made, we jump to the next step.", "4. If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success; otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For $j \in [|\beta|]$, we write $\beta_j$ to denote the $j$-th vertex in $\beta$.", "We choose a vertex $v_{i^*}$ in $\eta$ such that $i^* = \arg\max_{i \in [m]} \min \{\, j \mid (v_i, \beta_j) \in E_G \,\}$.", "In words, $v_{i^*}$ is the concept from the cache whose closest neighbor in the buffer $\beta$ is furthest forward in $\beta$.", "We move the vertex $v_{i^*}$ out of the cache and push it onto the stack, for later processing.", "For each training example $(x_{1:n}, g)$, the transition system generates the output AMR graph $g$ from the input sequence $x_{1:n}$ through an oracle sequence $a_{1:q} \in \Sigma_a^*$, where $\Sigma_a$ is the union of all possible actions.", "We model the probability of the output with the action sequence $P(a_{1:q} \mid x_{1:n}) = \prod_{t=1}^{q} P(a_t \mid a_1, \ldots, a_{t-1}, x_{1:n}; \theta)$, which we estimate using a sequence-to-sequence model, as we will describe in the next section."
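The vertex-choice rule above can be transcribed almost literally; the sketch below assumes gold edges are stored as vertex pairs (checked in both orders) and treats the minimum over an empty set as infinity, so that a cache vertex with no remaining buffer neighbor is preferred for eviction.

def choose_cache_position(cache, buffer, gold_edges):
    # i* = argmax_{i in [m]} min { j : (v_i, beta_j) in E_G }
    def closest_buffer_neighbor(v):
        js = [j for j, b in enumerate(buffer)
              if (v, b) in gold_edges or (b, v) in gold_edges]
        return min(js) if js else float("inf")   # no future neighbor: safest vertex to move out
    return max(range(len(cache)), key=lambda i: closest_buffer_neighbor(cache[i]))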
"Soft vs. Hard Attention for Sequence-to-action-sequence: As shown in Figure 3, our sequence-to-sequence model takes a word sequence $w_{1:n}$ and its mapped concept sequence $c_{1:n}$ as the input, and the action sequence $a_{1:q}$ as the output.", "It uses two BiLSTM encoders, each encoding one input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder: Given an input word sequence $w_{1:n}$, we use a bidirectional LSTM to encode it.", "At each step $j$, the current hidden states $\overleftarrow{h}^w_j$ and $\overrightarrow{h}^w_j$ are generated from the previous hidden states $\overleftarrow{h}^w_{j+1}$ and $\overrightarrow{h}^w_{j-1}$ and the representation vector $x_j$ of the current input word $w_j$: $\overleftarrow{h}^w_j = \mathrm{LSTM}(\overleftarrow{h}^w_{j+1}, x_j)$, $\overrightarrow{h}^w_j = \mathrm{LSTM}(\overrightarrow{h}^w_{j-1}, x_j)$.", "The representation vector $x_j$ is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word $w_j$: $h^w_j = [\overleftarrow{h}^w_j; \overrightarrow{h}^w_j]$.", "Similarly, for the concept sequence, the final hidden state for concept $c_j$ is $h^c_j = [\overleftarrow{h}^c_j; \overrightarrow{h}^c_j]$.", "LSTM Decoder with Soft Attention: We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two attention memories $H^w$ and $H^c$, where $H^w$ is the concatenation of the state vectors of all input words, and $H^c$ is defined correspondingly for input concepts: $H^w = [h^w_1; h^w_2; \ldots; h^w_n]$ (1), $H^c = [h^c_1; h^c_2; \ldots; h^c_n]$ (2).", "The decoder yields an action sequence $a_1, a_2, \ldots, a_q$ as the output by calculating a sequence of hidden states $s_1, s_2, \ldots, s_q$ recurrently.", "While generating the $t$-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model, $s_{t-1}$; (2) the embedding of the previously generated action, $e_{t-1}$; and (3) the previous context vectors for words, $\mu^w_{t-1}$, and concepts, $\mu^c_{t-1}$, which are calculated using $H^w$ and $H^c$, respectively.", "When $t = 1$, we initialize $\mu_0$ as a zero vector, and set $e_0$ to the embedding of the start token \"<s>\".", "The hidden state $s_0$ is initialized as $s_0 = W_d [\overleftarrow{h}^w_1; \overrightarrow{h}^w_n; \overleftarrow{h}^c_1; \overrightarrow{h}^c_n] + b_d$, where $W_d$ and $b_d$ are model parameters.", "For each time step $t$, the decoder feeds the concatenation of the embedding of the previous action $e_{t-1}$ and the previous context vectors for words $\mu^w_{t-1}$ and concepts $\mu^c_{t-1}$ into the LSTM model to update its hidden state: $s_t = \mathrm{LSTM}(s_{t-1}, [e_{t-1}; \mu^w_{t-1}; \mu^c_{t-1}])$ (3).", "Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Taking the word sequence as an example, the attention weight $\alpha^w_{t,i}$ on $h^w_i \in H^w$ for time step $t$ is calculated as $\epsilon_{t,i} = v_c^T \tanh(W_h h^w_i + W_s s_t + b_c)$ and $\alpha^w_{t,i} = \exp(\epsilon_{t,i}) / \sum_{j=1}^{n} \exp(\epsilon_{t,j})$, where $W_h$, $W_s$, $v_c$ and $b_c$ are model parameters.", "The new context vector is $\mu^w_t = \sum_{i=1}^{n} \alpha^w_{t,i} h^w_i$.", "The calculation of $\mu^c_t$ follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by $P_{\Sigma_a} = \mathrm{softmax}(V_a [s_t; \mu^w_t; \mu^c_t] + b_a)$ (4), where $V_a$ and $b_a$ are learnable parameters, the number of rows in $V_a$ equals the number of all actions, and the symbol $\Sigma_a$ denotes the set of all actions."
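As a concreteness check, one soft-attention step over the word memory (the score, softmax, and context computations in the equations above) reduces to a few lines of NumPy; the shapes and parameter names here are our own.

import numpy as np

def soft_attention(H_w, s_t, W_h, W_s, v_c, b_c):
    # H_w: (n, d) word memory; s_t: (d_s,) decoder state; returns the context vector mu_t.
    scores = np.tanh(H_w @ W_h.T + s_t @ W_s.T + b_c) @ v_c   # eps_{t,i} for every input position
    alpha = np.exp(scores - scores.max())                     # softmax, shifted for numerical stability
    alpha /= alpha.sum()
    return alpha @ H_w                                        # mu_t = sum_i alpha_i * h_i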
"Monotonic Hard Attention for Transition Systems: When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, $l_w$ and $l_c$, to model monotonic attention to the word and concept sequences, respectively.", "In contrast to Equation 3, the update to the decoder state now relies on a single position of each input sequence: $s_t = \mathrm{LSTM}(s_{t-1}, [e_{t-1}; h^w_{l_w}; h^c_{l_c}])$ (5).", "Control Mechanism: Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus $l_c$ to the next position after arc decisions to all the other $m-1$ cache concepts are made.", "We move the word attention focus $l_w$ to its aligned position in case the new concept is aligned; otherwise we do not move the word focus.", "As shown in Figure 4, after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for the Decoder: Another difference of our model from Buys and Blunsom (2017) is that we extract features from the current transition state configuration $C_t$: $e_f(C_t) = [e_{f_1}(C_t); e_{f_2}(C_t); \cdots; e_{f_l}(C_t)]$, where $l$ is the number of features extracted from $C_t$ and $e_{f_k}(C_t)$ ($k = 1, \ldots, l$) represents the embedding for the $k$-th feature, which is learned during training.", "These feature embeddings are concatenated as $e_f(C_t)$ and fed as additional input to the decoder.", "For the soft attention decoder: $s_t = \mathrm{LSTM}(s_{t-1}, [e_{t-1}; \mu^w_{t-1}; \mu^c_{t-1}; e_f(C_t)])$; and for the hard attention decoder: $s_t = \mathrm{LSTM}(s_{t-1}, [e_{t-1}; h^w_{l_w}; h^c_{l_c}; e_f(C_t)])$.", "We use the following features in our experiments.", "1. Phase type: indicator features showing which phase the next transition is in.", "2. ShiftOrPop features: token features for the rightmost cache concept and the leftmost buffer concept; the number of dependencies to words on the right, and the top three dependency labels for them.", "3. ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to; the word, concept, and dependency distances between the two concepts; the labels of the two most recent outgoing arcs for these two concepts, their first incoming arc, and the number of incoming arcs; and the dependency label between the two positions if there is a dependency arc between them.", "4. PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic given the last action output.", "For example, if the last action output is Shift, the current phase type would be PushIndex.", "We only extract the corresponding features for this phase and fill all the other feature types with -NULL- as placeholders.", "The features for the other phases are handled similarly."
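The hard-attention pointer control described above is deterministic and tiny; a sketch, where alignment is assumed to map a concept index to its aligned word index (or None for unaligned concepts):

def advance_focus(l_w, l_c, alignment):
    # Called once all m-1 arc decisions for the current rightmost cache concept are made.
    l_c += 1                               # concept focus always moves to the next concept
    if alignment.get(l_c) is not None:     # word focus follows only if the new concept is aligned
        l_w = alignment[l_c]
    return l_w, l_c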
"AMR Parsing (Training and Decoding): We train our models using the cross-entropy loss over each oracle action sequence $a^*_1, \ldots, a^*_q$: $L = -\sum_{t=1}^{q} \log P(a^*_t \mid a^*_1, \ldots, a^*_{t-1}, X; \theta)$ (6), where $X$ represents the input word and concept sequences, and $\theta$ denotes the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected for evaluation on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both the encoder and the decoder are set to 100.", "The word embeddings are initialized from GloVe pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with sizes of 20 and 50, respectively."
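Equation (6) is ordinary teacher-forced cross-entropy over the oracle actions; a minimal sketch, assuming step_probs[t] is the model's action distribution at step t computed with the gold history fed in:

import numpy as np

def oracle_sequence_loss(step_probs, oracle_actions):
    # L = -sum_t log P(a*_t | a*_1..a*_{t-1}, X); oracle_actions holds gold action indices.
    return -sum(np.log(step_probs[t][a]) for t, a in enumerate(oracle_actions))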
the update.", "Table 2 shows the impact of different components for the sequence-to-sequence model.", "We can see that the transition state features play a very important role for predicting the correct transition action.", "This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction.", "Relying on the sequence-to-sequence model alone does not perform well in disambiguating these choices, while the transition state can enforce direct constraints.", "We can also see that while the hard attention only attends to one position of the input, it performs slightly better than the soft attention model, while the time complexity is lower.", "Impact of Different Components Impact of Different Cache Sizes The cache size of the transition system can be optimized as a trade-off between coverage of AMR graphs and the prediction accuracy.", "While larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make.", "From Table 3 we can see that Comparison with other Parsers Table 4 shows the comparison with other AMR parsers.", "The first three systems are some competitive neural models.", "We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017) .", "Konstas et al.", "(2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use selftraining on 20M unlabeled Gigaword sentences.", "Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction.", "Our model also We also show the performance of some of the best-performing models.", "While our hard attention achieves slightly lower performance in comparison with Wang et al.", "(2015a) and , it is worth noting that their approaches of using WordNet, semantic role labels and word cluster features are complimentary to ours.", "The alignment from the aligner and the concept identification identifier also play an important role for improving the performance.", "propose to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.", "Dealing with Reentrancy Reentrancy is an important characteristic of AMR, and we evaluate the Smatch score only on the reentrant edges following Damonte et al.", "(2017) .", "From Table 5 we can see that our hard attention model significantly outperforms the feedforward model of in predicting reentrancies.", "This is because predicting reentrancy is directly related to the Ar-cBinary phase of the cache transition system since it decides to make multiple arc decisions to the same vertex, and we can see from Table 1 that the hard attention model has significantly better prediction accuracy in this phase.", "We also compare the reentrancy results of our transition system with two other systems, Damonte et al.", "(2017) and JAMR, where these statistics are available.", "From Table 5 , we can see that our cache transition system slightly outperforms these two systems in predicting reentrancies.", "Figure 5 shows a reentrancy example where JAMR and the feedforward network of do not predict well, while our system predicts the correct output.", "JAMR fails to predict the reentrancy arc from desire-01 to i, and connects the wrong arc from \"live-01\" to \"-\" instead of 
from \"desire-01\".", "The feedforward model of and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5 .", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available.", "While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transi", "Soft vs Hard Attention for", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-4
Cache transition system
Stack σ: place for temporarily storing concepts. Cache η: working zone for making edges, with a fixed size corresponding to the treewidth. Buffer β: unprocessed concepts. E: set of already-built edges. SHIFT PUSH(i): shift one concept from the buffer to the right-most position of the cache, then select one concept (index i) from the cache and move it to the stack. POP: pop the top of the stack and put it back into the cache, then drop the right-most item from the cache. Arc(i, l, d): make an arc (with direction d and label l) between the right-most node and node i; Arc(i, -, -) represents no edge between them.
Stack σ: place for temporarily storing concepts. Cache η: working zone for making edges, with a fixed size corresponding to the treewidth. Buffer β: unprocessed concepts. E: set of already-built edges. SHIFT PUSH(i): shift one concept from the buffer to the right-most position of the cache, then select one concept (index i) from the cache and move it to the stack. POP: pop the top of the stack and put it back into the cache, then drop the right-most item from the cache. Arc(i, l, d): make an arc (with direction d and label l) between the right-most node and node i; Arc(i, -, -) represents no edge between them.
[]
GEM-SciDuet-train-125#paper-1343#slide-5
1343
Sequence-to-sequence Models for Cache Transition Systems
GEM-SciDuet-train-125#paper-1343#slide-5
Example of cache transition
[Figure: an example run of the cache transition parser, showing the stack, cache, and buffer contents at successive steps, the actions taken (SHIFT PUSH(1), followed by a sequence of POP actions), and the hypothesis graph with ARG0/ARG1 arcs being built.]
[Figure: an example run of the cache transition parser, showing the stack, cache, and buffer contents at successive steps, the actions taken (SHIFT PUSH(1), followed by a sequence of POP actions), and the hypothesis graph with ARG0/ARG1 arcs being built.]
[]
GEM-SciDuet-train-125#paper-1343#slide-6
1343
Sequence-to-sequence Models for Cache Transition Systems
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "propose a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017) , which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing.", "Cache Transition Parser We adopt the transition system of , which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v 1 , .", ".", ".", ", v m ].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form: C = (σ, η, β, G p ) where σ, η and β are as described above, and G p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, .", ".", ".", ", $], [c 1 , .", ".", ".", ", c n 
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices constrained by the order of the input sentence.", "The final configuration is ([], [$, .", ".", ".", ", $] , [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w 1:n = w 1 , .", ".", ".", ", w n to a sequence of concepts c 1:n = c 1 , .", ".", ".", ", c n .", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "2 4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by .", "Let E G be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. 
The vertices are ordered according to their aligned position in the word sequence and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "ShiftOrPop phase: the oracle chooses transi - tion Pop, in case there is no edge (v m , v) in E G such that vertex v is in S, or chooses tran- sition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept and removes the vertex at this position and places its index, vertex pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success, otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β j to denote the j-th vertex in β.", "We choose a vertex v i * in η such that: In words, v i * is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move out of the cache vertex v i * and push it onto the stack, for later processing.", "i * = argmax i∈[m] min {j | (v i , β j ) ∈ E G } For each training example (x 1:n , g), the transition system generates the output AMR graph g from the input sequence x 1:n through an oracle sequence a 1:q ∈ Σ * a , where Σ a is the union of all possible actions.", "We model the probability of the output with the action sequence: P (a 1:q |x 1:n ) = q t=1 P (a t |a 1 , .", ".", ".", ", a t−1 , x 1:n ; θ) which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence Shown in Figure 3 , our sequence-to-sequence model takes a word sequence w 1:n and its mapped concept sequence c 1:n as the input, and the action sequence a 1:q as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w 1:n , we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ← − h w j and − → h w j are generated from the previous hidden states ← − h w j+1 and − → h w j−1 , and the representation vector x j of the current input word w j : ← − h w j = LSTM( ← − h w j+1 , x j ) − → h w j = LSTM( − → h w j−1 , x j ) The representation vector x j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w j : h w j = [ ← − h w j ; − → h w j ] Similarly, for the concept sequence, the final hidden state for concept c j is: h c j = [ ← − h c j ; − → h c j ] LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two 
attention memories H w and H c , where H w is the concatenation of the state vectors of all input words, and H c for input concepts correspondingly: H w = [h w 1 ; h w 2 ; .", ".", ".", "; h w n ] (1) H c = [h c 1 ; h c 2 ; .", ".", ".", "; h c n ] (2) The decoder yields an action sequence a 1 , a 2 , .", ".", ".", ", a q as the output by calculating a sequence of hidden states s 1 , s 2 .", ".", ".", ", s q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model s t−1 ; (2) the embedding of the previous generated action e t−1 ; and (3) the previous context vectors for words µ w t−1 and concepts µ c t−1 , which are calculated using H w and H c , respectively.", "When t = 1, we initialize µ 0 as a zero vector, and set e 0 to the embedding of the start token \" s \".", "The hidden state s 0 is initialized as: s 0 = W d [ ← − h w 1 ; − → h w n ; ← − h c 1 ; − → h c n ] + b d , where W d and b d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of previous action e t−1 and the previous context vectors for words µ w t−1 and concepts µ c t−1 into the LSTM model to update its hidden state.", "s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ]) (3) Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Take the word sequence as an example, α w t,i on h w i ∈ H w for time-step t is calculated as: t,i = v T c tanh(W h h w i + W s s t + b c ) α w t,i = exp( t,i ) N j=1 exp( t,j ) W h , W s , v c and b c are model parameters.", "The new context vector µ w t = n i=1 α w t,i h w i .", "The calculation of µ c t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: (4) where V a and b a are learnable parameters, and the number of rows in V a represents the number of all actions.", "The symbol Σ a is the set of all actions.", "P Σa = softmax(V a [s t ; µ w t ; µ c t ] + b a ), Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l w and l c , to model monotonic attention to word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence in contrast to Equation 3: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ]) (5) Control Mechanism.", "Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus l c to the next position after arc decisions to all the other m − 1 cache concepts are made.", "We move the word attention focus l w to its aligned position in case the new concept is aligned, otherwise we don't move the word focus.", "As shown in Figure 4 , after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for 
Decoder Another difference of our model with Buys and Blunsom (2017) is that we extract features from the current transition state configuration C t : e f (C t ) = [e f 1 (C t ); e f 2 (C t ); · · · ; e f l (C t )] where l is the number of features extracted from C t and e f k (C t ) (k = 1, .", ".", ".", ", l) represents the embedding for the k-th feature, which is learned during training.", "These feature embeddings are concatenated as e f (C t ), and fed as additional input to the decoder.", "For the soft attention decoder: s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ; e f (C t )]) and for the hard attention decoder: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ; e f (C t )]) We use the following features in our experiments: 1.", "Phase type: indicator features showing which phase the next transition is.", "2.", "ShiftOrPop features: token features 3 for the rightmost cache concept and the leftmost buffer concept.", "Number of dependencies to words on the right, and the top three dependency labels for them.", "3.", "ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to.", "Word, concept and dependency distance between the two concepts.", "The labels for the two most recent outgoing arcs for these two concepts and their first incoming arc and the number of incoming arcs.", "Dependency label between the two positions if there is a dependency arc between them.", "4.", "PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic from the last action output.", "For example, if the last action output is Shift, the current phase type would be PushIndex.", "We only extract corresponding features for this phase and fill all the other feature types with -NULLas placeholders.", "The features for other phases are similar.", "AMR Parsing Training and Decoding We train our models using the cross-entropy loss, over each oracle action sequence a * 1 , .", ".", ".", ", a * q : L = − q t=1 log P (a * t |a * 1 , .", ".", ".", ", a * t−1 , X; θ), (6) where X represents the input word and concept sequences, and θ is the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected to evaluate on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both encoder and decoder are set to 100.", "The word embeddings are initialized from Glove pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with the sizes of 20 and 50, respectively.", "Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment.", "We define some special categories such as named entities (NE), dates (DATE), single rooted subgraphs involving multiple concepts (MULT) 4 , numbers (NUMBER) and phrases (PHRASE).", "The phrases are extracted based on the multiple-to-one alignment in the training data.", "One example phrase is more than which aligns to a single concept more-than.", "We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014) , which greedily 
aligns a span of words to AMR subgraphs using a set of heuristics.", "This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side.", "We use the semi-Markov model from Flanigan et al.", "(2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph.", "During decoding, our output has categories, and we need to map each category to the corresponding AMR concept or subgraph.", "We save a table Q which shows the original subgraph each category is collapsed from, and map each category to its original subgraph representation.", "We also use heuristic rules to generate the target-side AMR subgraph representation for NE, DATE, and NUMBER based on the source side tokens.", "Experiments We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 task 8 on meaning representation parsing (May, 2016) .", "The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences which mainly cover domains like newswire, discussion forum, etc.", "All parsing results are measured by Smatch (version 2.0.2) .", "Experiment Settings We categorize the training data using the automatic alignment and dump a template for date entities and frequent phrases from the multiple to one alignment.", "We also generate an alignment table from tokens or phrases to their candidate targetside subgraphs.", "For the dev and test data, we first extract the named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009 ) and extract date entities by matching spans with the date template.", "We further categorize the dataset with the categories we have defined.", "After categorization, we use Stanford CoreNLP to get the POS tags and dependencies of the categorized dataset.", "We run the oracle algorithm separately for training and dev data (with alignment) to get the statistics of individual phases.", "We use a cache size of 5 in our experiments.", "Results Individual Phase Accuracy We first evaluate the prediction accuracy of individual phases on the dev oracle data assuming gold prediction history.", "The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions respectively.", "Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.", "use a separate feedforward network to predict each phase independently.", "We use the same alignment from the SemEval dataset as in to avoid differences resulting from the aligner.", "Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats is using hard attention.", "We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset.", "The sequence-to-sequence models perform better than the feedforward model of on ShiftOrPop and ArcBinary, which shows that the whole-sentence context information is important for the prediction of these two phases.", "On the other hand, the sequence-tosequence models perform worse than the feedforward models on PushIndex and ArcLabel.", "One possible reason is that the model tries to optimize the overall accuracy, while these two phases account for fewer than 25% of the total transition actions and might be less attended to during 
the update.", "Table 2 shows the impact of different components for the sequence-to-sequence model.", "We can see that the transition state features play a very important role for predicting the correct transition action.", "This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction.", "Relying on the sequence-to-sequence model alone does not perform well in disambiguating these choices, while the transition state can enforce direct constraints.", "We can also see that while the hard attention only attends to one position of the input, it performs slightly better than the soft attention model, while the time complexity is lower.", "Impact of Different Components Impact of Different Cache Sizes The cache size of the transition system can be optimized as a trade-off between coverage of AMR graphs and the prediction accuracy.", "While larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make.", "From Table 3 we can see that Comparison with other Parsers Table 4 shows the comparison with other AMR parsers.", "The first three systems are some competitive neural models.", "We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017) .", "Konstas et al.", "(2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use selftraining on 20M unlabeled Gigaword sentences.", "Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction.", "Our model also We also show the performance of some of the best-performing models.", "While our hard attention achieves slightly lower performance in comparison with Wang et al.", "(2015a) and , it is worth noting that their approaches of using WordNet, semantic role labels and word cluster features are complimentary to ours.", "The alignment from the aligner and the concept identification identifier also play an important role for improving the performance.", "propose to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.", "Dealing with Reentrancy Reentrancy is an important characteristic of AMR, and we evaluate the Smatch score only on the reentrant edges following Damonte et al.", "(2017) .", "From Table 5 we can see that our hard attention model significantly outperforms the feedforward model of in predicting reentrancies.", "This is because predicting reentrancy is directly related to the Ar-cBinary phase of the cache transition system since it decides to make multiple arc decisions to the same vertex, and we can see from Table 1 that the hard attention model has significantly better prediction accuracy in this phase.", "We also compare the reentrancy results of our transition system with two other systems, Damonte et al.", "(2017) and JAMR, where these statistics are available.", "From Table 5 , we can see that our cache transition system slightly outperforms these two systems in predicting reentrancies.", "Figure 5 shows a reentrancy example where JAMR and the feedforward network of do not predict well, while our system predicts the correct output.", "JAMR fails to predict the reentrancy arc from desire-01 to i, and connects the wrong arc from \"live-01\" to \"-\" instead of 
from \"desire-01\".", "The feedforward model of and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5 .", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available.", "While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transi", "Soft vs Hard Attention for", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-6
Sequence to sequence models for cache transition system
Concepts are generated from input sentences by another classifier in the preprocessing step. Separate encoders are adopted for input sentences and sequences of concepts, respectively. One decoder for generating transition actions.
Concepts are generated from input sentences by another classifier in the preprocessing step. Separate encoders are adopted for input sentences and sequences of concepts, respectively. One decoder for generating transition actions.
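This slide's two-encoder/one-decoder layout relies on soft attention computed once over the word states and once over the concept states. Below is a NumPy sketch of a single attention step following equations (1)-(4) of the paper content above; the parameter names mirror the paper's W_h, W_s, v_c, b_c, but the function itself and the toy shapes are ours.

```python
import numpy as np

def soft_attention(H, s_t, W_h, W_s, v_c, b_c):
    # H: (n, d_enc) encoder states; s_t: (d_dec,) current decoder state.
    scores = np.tanh(H @ W_h.T + s_t @ W_s.T + b_c) @ v_c   # epsilon_{t,i}, shape (n,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                    # attention weights alpha_{t,i}
    return alpha @ H                                        # context vector mu_t

rng = np.random.default_rng(0)
H_w = rng.normal(size=(6, 200))          # six word states, 100 per LSTM direction
mu_w = soft_attention(H_w, rng.normal(size=(100,)),
                      W_h=rng.normal(size=(100, 200)),
                      W_s=rng.normal(size=(100, 100)),
                      v_c=rng.normal(size=(100,)),
                      b_c=np.zeros(100))
```

The decoder computes a second context over the concept states with its own parameters, and feeds [e_{t−1}; µ^w_{t−1}; µ^c_{t−1}] into the LSTM as in equation (3).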
[]
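The monotonic hard attention of Section 3.3 replaces the softmax above with two integer foci that only ever move right. A minimal sketch; the dict-based concept-to-word alignment interface is our assumption, not the paper's.

```python
def advance_foci(l_w, l_c, concept_to_word, arcs_made, cache_size):
    # Move the concept focus l_c once arc decisions to the other m-1 cache
    # concepts are done; move the word focus l_w only if the new concept is
    # aligned, so unaligned words are simply skipped.
    if arcs_made == cache_size - 1:
        l_c += 1
        l_w = concept_to_word.get(l_c, l_w)
    return l_w, l_c

# For "John wants to go" with concepts "Per want-01 go-01", the alignment
# {0: 0, 1: 1, 2: 3} makes the word focus jump from "wants" (1) straight to
# "go" (3), skipping the unaligned "to", as in the paper's Figure 4.
```

The decoder state update then reads the pointed-to states directly, s_t = LSTM(s_{t−1}, [e_{t−1}; h^w_{l_w}; h^c_{l_c}]), which is why the hard model is cheaper per step than soft attention.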
GEM-SciDuet-train-125#paper-1343#slide-7
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence-to-graph mapping problem to a word sequence to transition action sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "propose a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017) , which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing.", "Cache Transition Parser We adopt the transition system of , which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v 1 , .", ".", ".", ", v m ].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form: C = (σ, η, β, G p ) where σ, η and β are as described above, and G p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, .", ".", ".", ", $], [c 1 , .", ".", ".", ", c n 
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices constrained by the order of the input sentence.", "The final configuration is ([], [$, .", ".", ".", ", $] , [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w 1:n = w 1 , .", ".", ".", ", w n to a sequence of concepts c 1:n = c 1 , .", ".", ".", ", c n .", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "2 4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by .", "Let E G be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. 
The vertices are ordered according to their aligned position in the word sequence and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "ShiftOrPop phase: the oracle chooses transi - tion Pop, in case there is no edge (v m , v) in E G such that vertex v is in S, or chooses tran- sition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept and removes the vertex at this position and places its index, vertex pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success, otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β j to denote the j-th vertex in β.", "We choose a vertex v i * in η such that: In words, v i * is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move out of the cache vertex v i * and push it onto the stack, for later processing.", "i * = argmax i∈[m] min {j | (v i , β j ) ∈ E G } For each training example (x 1:n , g), the transition system generates the output AMR graph g from the input sequence x 1:n through an oracle sequence a 1:q ∈ Σ * a , where Σ a is the union of all possible actions.", "We model the probability of the output with the action sequence: P (a 1:q |x 1:n ) = q t=1 P (a t |a 1 , .", ".", ".", ", a t−1 , x 1:n ; θ) which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence Shown in Figure 3 , our sequence-to-sequence model takes a word sequence w 1:n and its mapped concept sequence c 1:n as the input, and the action sequence a 1:q as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w 1:n , we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ← − h w j and − → h w j are generated from the previous hidden states ← − h w j+1 and − → h w j−1 , and the representation vector x j of the current input word w j : ← − h w j = LSTM( ← − h w j+1 , x j ) − → h w j = LSTM( − → h w j−1 , x j ) The representation vector x j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w j : h w j = [ ← − h w j ; − → h w j ] Similarly, for the concept sequence, the final hidden state for concept c j is: h c j = [ ← − h c j ; − → h c j ] LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two 
attention memories H w and H c , where H w is the concatenation of the state vectors of all input words, and H c for input concepts correspondingly: H w = [h w 1 ; h w 2 ; .", ".", ".", "; h w n ] (1) H c = [h c 1 ; h c 2 ; .", ".", ".", "; h c n ] (2) The decoder yields an action sequence a 1 , a 2 , .", ".", ".", ", a q as the output by calculating a sequence of hidden states s 1 , s 2 .", ".", ".", ", s q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model s t−1 ; (2) the embedding of the previous generated action e t−1 ; and (3) the previous context vectors for words µ w t−1 and concepts µ c t−1 , which are calculated using H w and H c , respectively.", "When t = 1, we initialize µ 0 as a zero vector, and set e 0 to the embedding of the start token \" s \".", "The hidden state s 0 is initialized as: s 0 = W d [ ← − h w 1 ; − → h w n ; ← − h c 1 ; − → h c n ] + b d , where W d and b d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of previous action e t−1 and the previous context vectors for words µ w t−1 and concepts µ c t−1 into the LSTM model to update its hidden state.", "s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ]) (3) Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Take the word sequence as an example, α w t,i on h w i ∈ H w for time-step t is calculated as: t,i = v T c tanh(W h h w i + W s s t + b c ) α w t,i = exp( t,i ) N j=1 exp( t,j ) W h , W s , v c and b c are model parameters.", "The new context vector µ w t = n i=1 α w t,i h w i .", "The calculation of µ c t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: (4) where V a and b a are learnable parameters, and the number of rows in V a represents the number of all actions.", "The symbol Σ a is the set of all actions.", "P Σa = softmax(V a [s t ; µ w t ; µ c t ] + b a ), Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l w and l c , to model monotonic attention to word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence in contrast to Equation 3: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ]) (5) Control Mechanism.", "Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus l c to the next position after arc decisions to all the other m − 1 cache concepts are made.", "We move the word attention focus l w to its aligned position in case the new concept is aligned, otherwise we don't move the word focus.", "As shown in Figure 4 , after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for 
Decoder Another difference of our model with Buys and Blunsom (2017) is that we extract features from the current transition state configuration C t : e f (C t ) = [e f 1 (C t ); e f 2 (C t ); · · · ; e f l (C t )] where l is the number of features extracted from C t and e f k (C t ) (k = 1, .", ".", ".", ", l) represents the embedding for the k-th feature, which is learned during training.", "These feature embeddings are concatenated as e f (C t ), and fed as additional input to the decoder.", "For the soft attention decoder: s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ; e f (C t )]) and for the hard attention decoder: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ; e f (C t )]) We use the following features in our experiments: 1.", "Phase type: indicator features showing which phase the next transition is.", "2.", "ShiftOrPop features: token features 3 for the rightmost cache concept and the leftmost buffer concept.", "Number of dependencies to words on the right, and the top three dependency labels for them.", "3.", "ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to.", "Word, concept and dependency distance between the two concepts.", "The labels for the two most recent outgoing arcs for these two concepts and their first incoming arc and the number of incoming arcs.", "Dependency label between the two positions if there is a dependency arc between them.", "4.", "PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic from the last action output.", "For example, if the last action output is Shift, the current phase type would be PushIndex.", "We only extract corresponding features for this phase and fill all the other feature types with -NULLas placeholders.", "The features for other phases are similar.", "AMR Parsing Training and Decoding We train our models using the cross-entropy loss, over each oracle action sequence a * 1 , .", ".", ".", ", a * q : L = − q t=1 log P (a * t |a * 1 , .", ".", ".", ", a * t−1 , X; θ), (6) where X represents the input word and concept sequences, and θ is the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected to evaluate on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both encoder and decoder are set to 100.", "The word embeddings are initialized from Glove pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with the sizes of 20 and 50, respectively.", "Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment.", "We define some special categories such as named entities (NE), dates (DATE), single rooted subgraphs involving multiple concepts (MULT) 4 , numbers (NUMBER) and phrases (PHRASE).", "The phrases are extracted based on the multiple-to-one alignment in the training data.", "One example phrase is more than which aligns to a single concept more-than.", "We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014) , which greedily 
aligns a span of words to AMR subgraphs using a set of heuristics.", "This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side.", "We use the semi-Markov model from Flanigan et al.", "(2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph.", "During decoding, our output has categories, and we need to map each category to the corresponding AMR concept or subgraph.", "We save a table Q which shows the original subgraph each category is collapsed from, and map each category to its original subgraph representation.", "We also use heuristic rules to generate the target-side AMR subgraph representation for NE, DATE, and NUMBER based on the source side tokens.", "Experiments We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 task 8 on meaning representation parsing (May, 2016) .", "The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences which mainly cover domains like newswire, discussion forum, etc.", "All parsing results are measured by Smatch (version 2.0.2) .", "Experiment Settings We categorize the training data using the automatic alignment and dump a template for date entities and frequent phrases from the multiple to one alignment.", "We also generate an alignment table from tokens or phrases to their candidate targetside subgraphs.", "For the dev and test data, we first extract the named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009 ) and extract date entities by matching spans with the date template.", "We further categorize the dataset with the categories we have defined.", "After categorization, we use Stanford CoreNLP to get the POS tags and dependencies of the categorized dataset.", "We run the oracle algorithm separately for training and dev data (with alignment) to get the statistics of individual phases.", "We use a cache size of 5 in our experiments.", "Results Individual Phase Accuracy We first evaluate the prediction accuracy of individual phases on the dev oracle data assuming gold prediction history.", "The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions respectively.", "Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.", "use a separate feedforward network to predict each phase independently.", "We use the same alignment from the SemEval dataset as in to avoid differences resulting from the aligner.", "Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats is using hard attention.", "We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset.", "The sequence-to-sequence models perform better than the feedforward model of on ShiftOrPop and ArcBinary, which shows that the whole-sentence context information is important for the prediction of these two phases.", "On the other hand, the sequence-tosequence models perform worse than the feedforward models on PushIndex and ArcLabel.", "One possible reason is that the model tries to optimize the overall accuracy, while these two phases account for fewer than 25% of the total transition actions and might be less attended to during 
the update.", "Table 2 shows the impact of different components for the sequence-to-sequence model.", "We can see that the transition state features play a very important role for predicting the correct transition action.", "This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction.", "Relying on the sequence-to-sequence model alone does not perform well in disambiguating these choices, while the transition state can enforce direct constraints.", "We can also see that while the hard attention only attends to one position of the input, it performs slightly better than the soft attention model, while the time complexity is lower.", "Impact of Different Components Impact of Different Cache Sizes The cache size of the transition system can be optimized as a trade-off between coverage of AMR graphs and the prediction accuracy.", "While larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make.", "From Table 3 we can see that Comparison with other Parsers Table 4 shows the comparison with other AMR parsers.", "The first three systems are some competitive neural models.", "We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017) .", "Konstas et al.", "(2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use selftraining on 20M unlabeled Gigaword sentences.", "Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction.", "Our model also We also show the performance of some of the best-performing models.", "While our hard attention achieves slightly lower performance in comparison with Wang et al.", "(2015a) and , it is worth noting that their approaches of using WordNet, semantic role labels and word cluster features are complimentary to ours.", "The alignment from the aligner and the concept identification identifier also play an important role for improving the performance.", "propose to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.", "Dealing with Reentrancy Reentrancy is an important characteristic of AMR, and we evaluate the Smatch score only on the reentrant edges following Damonte et al.", "(2017) .", "From Table 5 we can see that our hard attention model significantly outperforms the feedforward model of in predicting reentrancies.", "This is because predicting reentrancy is directly related to the Ar-cBinary phase of the cache transition system since it decides to make multiple arc decisions to the same vertex, and we can see from Table 1 that the hard attention model has significantly better prediction accuracy in this phase.", "We also compare the reentrancy results of our transition system with two other systems, Damonte et al.", "(2017) and JAMR, where these statistics are available.", "From Table 5 , we can see that our cache transition system slightly outperforms these two systems in predicting reentrancies.", "Figure 5 shows a reentrancy example where JAMR and the feedforward network of do not predict well, while our system predicts the correct output.", "JAMR fails to predict the reentrancy arc from desire-01 to i, and connects the wrong arc from \"live-01\" to \"-\" instead of 
from \"desire-01\".", "The feedforward model of and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5 .", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available.", "While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transition", "Soft vs Hard Attention for Sequence-to-action-sequence", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-7
Seq2seq soft attention+features
want-01 go-01 Input sequence Concept sequence
want-01 go-01 Input sequence Concept sequence
[]
GEM-SciDuet-train-125#paper-1343#slide-8
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence-to-graph mapping problem into a problem of mapping a word sequence to a transition action sequence, using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015), sentence compression (Takase et al., 2016), and event extraction (Huang et al., 2016).", "The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq. The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017).", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best-performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the current top-performing system.", "This raises the question of whether the advantages of neural and transition-based systems can be combined, as for example with the syntactic parser of Dyer et al.", "(2015), who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTMs to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training (sentence, AMR graph) pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "Prior work proposes a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "Follow-up work applies the cache transition system to AMR parsing and designs refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-action-sequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode whole-sentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use a bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017), which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left to right.", "When we process the buffer in this ordered manner, the sequence of target transition actions is also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of the output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequence-to-sequence models for AMR parsing.", "Cache Transition Parser We adopt a cache transition system, which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v_1, …, v_m].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form C = (σ, η, β, G_p), where σ, η, and β are as described above, and G_p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, …, $], [c_1, …, c_n
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices, constrained by the order of the input sentence.", "The final configuration is ([], [$, …, $], [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w_{1:n} = w_1, …, w_n to a sequence of concepts c_{1:n} = c_1, …, c_n.", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v_i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made, and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions, ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision on whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with a cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle has been shown in prior work.", "Let E_G be the set of edges of the gold graph G. We maintain the set of vertices not yet shifted into the cache as S, which is initialized with all vertices in G.
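To make the four transitions above concrete, here is a minimal Python sketch of a cache transition configuration. The class, method names, and representation choices are illustrative assumptions for exposition, not the authors' released implementation; error handling and the oracle are omitted.

```python
# Minimal sketch of a cache transition configuration (assumed data
# structures). "$" marks an empty cache slot, as in the initial state.

class CacheConfig:
    def __init__(self, concepts, cache_size):
        self.stack = []                      # (index, concept) pairs
        self.cache = ["$"] * cache_size      # m occurrences of "$"
        self.buffer = list(concepts)         # ordered input concepts
        self.edges = set()                   # partial graph G_p

    def pop(self):
        # Pop (i, v); put v back at cache position i, shift the rest
        # one slot right, and discard the last cache element.
        i, v = self.stack.pop()
        self.cache = self.cache[:i] + [v] + self.cache[i:-1]

    def shift(self):
        # Shift only signals that the next buffer concept will be
        # processed; the actual move happens in push_index.
        pass

    def push_index(self, i):
        # Move (i, cache[i]) onto the stack, then shift the next buffer
        # concept into the last cache position.
        self.stack.append((i, self.cache[i]))
        nxt = self.buffer.pop(0)
        self.cache = self.cache[:i] + self.cache[i + 1:] + [nxt]

    def arc(self, i, direction, label):
        # Arc(i, d, l): labeled arc between the rightmost cache concept
        # and the i-th one; a NOARC decision passes label=None.
        if label is not None:
            left, right = self.cache[i], self.cache[-1]
            head, dep = (right, left) if direction == "L" else (left, right)
            self.edges.add((head, label, dep))

    def is_final(self):
        return (not self.stack and not self.buffer
                and all(c == "$" for c in self.cache))
```

For the running example, a call sequence like push_index(...) followed by arc(i, "L", "ARG0") would reproduce the want-01 → Per edge from Figure 2; note how pop and push_index act as near-inverse operations on cache position i, which lets the parser restore an evicted concept later.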
The vertices are ordered according to their aligned position in the word sequence, and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E_G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "1.", "ShiftOrPop phase: the oracle chooses transition Pop, in case there is no edge (v_m, v) in E_G such that vertex v is in S, or chooses transition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept, removing the vertex at this position and placing its (index, vertex) pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success; otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β_j to denote the j-th vertex in β.", "We choose a vertex v_{i*} in η such that: i* = argmax_{i ∈ [m]} min {j | (v_i, β_j) ∈ E_G}.", "In words, v_{i*} is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move the vertex v_{i*} out of the cache and push it onto the stack, for later processing.", "For each training example (x_{1:n}, g), the transition system generates the output AMR graph g from the input sequence x_{1:n} through an oracle sequence a_{1:q} ∈ Σ_a^*, where Σ_a is the union of all possible actions.", "We model the probability of the output with the action sequence: P(a_{1:q} | x_{1:n}) = ∏_{t=1}^{q} P(a_t | a_1, …, a_{t−1}, x_{1:n}; θ), which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence As shown in Figure 3, our sequence-to-sequence model takes a word sequence w_{1:n} and its mapped concept sequence c_{1:n} as the input, and the action sequence a_{1:q} as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w_{1:n}, we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ←h^w_j and →h^w_j are generated from the previous hidden states ←h^w_{j+1} and →h^w_{j−1} and the representation vector x_j of the current input word w_j: ←h^w_j = LSTM(←h^w_{j+1}, x_j), →h^w_j = LSTM(→h^w_{j−1}, x_j).", "The representation vector x_j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w_j: h^w_j = [←h^w_j; →h^w_j].", "Similarly, for the concept sequence, the final hidden state for concept c_j is: h^c_j = [←h^c_j; →h^c_j].", "LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two
attention memories H^w and H^c, where H^w is the concatenation of the state vectors of all input words, and H^c is defined correspondingly for the input concepts: H^w = [h^w_1; h^w_2; …; h^w_n] (1), H^c = [h^c_1; h^c_2; …; h^c_n] (2).", "The decoder yields an action sequence a_1, a_2, …, a_q as the output by calculating a sequence of hidden states s_1, s_2, …, s_q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model s_{t−1}; (2) the embedding of the previously generated action e_{t−1}; and (3) the previous context vectors for words μ^w_{t−1} and concepts μ^c_{t−1}, which are calculated using H^w and H^c, respectively.", "When t = 1, we initialize μ_0 as a zero vector, and set e_0 to the embedding of the start token \"<s>\".", "The hidden state s_0 is initialized as: s_0 = W_d[←h^w_1; →h^w_n; ←h^c_1; →h^c_n] + b_d, where W_d and b_d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of the previous action e_{t−1} and the previous context vectors for words μ^w_{t−1} and concepts μ^c_{t−1} into the LSTM model to update its hidden state: s_t = LSTM(s_{t−1}, [e_{t−1}; μ^w_{t−1}; μ^c_{t−1}]) (3).", "Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Taking the word sequence as an example, the weight α^w_{t,i} on h^w_i ∈ H^w for time-step t is calculated as: ε_{t,i} = v_c^T tanh(W_h h^w_i + W_s s_t + b_c), α^w_{t,i} = exp(ε_{t,i}) / Σ_{j=1}^{n} exp(ε_{t,j}).", "W_h, W_s, v_c, and b_c are model parameters.", "The new context vector is μ^w_t = Σ_{i=1}^{n} α^w_{t,i} h^w_i.", "The calculation of μ^c_t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: P_{Σa} = softmax(V_a[s_t; μ^w_t; μ^c_t] + b_a) (4), where V_a and b_a are learnable parameters, and the number of rows in V_a represents the number of all actions.", "The symbol Σ_a is the set of all actions.", "Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l_w and l_c, to model monotonic attention to the word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence, in contrast to Equation 3: s_t = LSTM(s_{t−1}, [e_{t−1}; h^w_{l_w}; h^c_{l_c}]) (5).", "Control Mechanism.", "Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus l_c to the next position after arc decisions to all the other m − 1 cache concepts are made.", "We move the word attention focus l_w to its aligned position in case the new concept is aligned; otherwise we don't move the word focus.", "As shown in Figure 4, after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for
Decoder Another difference between our model and that of Buys and Blunsom (2017) is that we extract features from the current transition state configuration C_t: e_f(C_t) = [e_{f_1}(C_t); e_{f_2}(C_t); …; e_{f_l}(C_t)], where l is the number of features extracted from C_t and e_{f_k}(C_t) (k = 1, …, l) represents the embedding for the k-th feature, which is learned during training.", "These feature embeddings are concatenated as e_f(C_t) and fed as additional input to the decoder.", "For the soft attention decoder: s_t = LSTM(s_{t−1}, [e_{t−1}; μ^w_{t−1}; μ^c_{t−1}; e_f(C_t)]), and for the hard attention decoder: s_t = LSTM(s_{t−1}, [e_{t−1}; h^w_{l_w}; h^c_{l_c}; e_f(C_t)]).", "We use the following features in our experiments: 1.", "Phase type: indicator features showing which phase the next transition is in.", "2.", "ShiftOrPop features: token features for the rightmost cache concept and the leftmost buffer concept.", "The number of dependencies to words on the right, and the top three dependency labels for them.", "3.", "ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to.", "The word, concept, and dependency distance between the two concepts.", "The labels of the two most recent outgoing arcs for these two concepts, their first incoming arc, and the number of incoming arcs.", "The dependency label between the two positions if there is a dependency arc between them.", "4.", "PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic given the last action output.", "For example, if the last action output is Shift, the current phase type would be PushIndex.", "We only extract the corresponding features for this phase and fill all the other feature types with -NULL- as placeholders.", "The features for the other phases are similar.", "AMR Parsing Training and Decoding We train our models using the cross-entropy loss over each oracle action sequence a*_1, …, a*_q: L = −Σ_{t=1}^{q} log P(a*_t | a*_1, …, a*_{t−1}, X; θ) (6), where X represents the input word and concept sequences, and θ denotes the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected to evaluate on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both encoder and decoder are set to 100.", "The word embeddings are initialized from GloVe pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with sizes of 20 and 50, respectively.", "Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment.", "We define some special categories such as named entities (NE), dates (DATE), single-rooted subgraphs involving multiple concepts (MULT), numbers (NUMBER), and phrases (PHRASE).", "The phrases are extracted based on the multiple-to-one alignment in the training data.", "One example phrase is \"more than\", which aligns to the single concept more-than.", "We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014), which greedily
aligns a span of words to AMR subgraphs using a set of heuristics.", "This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side.", "We use the semi-Markov model from Flanigan et al.", "(2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph.", "During decoding, our output has categories, and we need to map each category to the corresponding AMR concept or subgraph.", "We save a table Q which records the original subgraph each category is collapsed from, and map each category to its original subgraph representation.", "We also use heuristic rules to generate the target-side AMR subgraph representation for NE, DATE, and NUMBER based on the source-side tokens.", "Experiments We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 task 8 on meaning representation parsing (May, 2016).", "The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences, which mainly cover domains like newswire, discussion forum, etc.", "All parsing results are measured by Smatch (version 2.0.2).", "Experiment Settings We categorize the training data using the automatic alignment and dump a template for date entities and frequent phrases from the multiple-to-one alignment.", "We also generate an alignment table from tokens or phrases to their candidate target-side subgraphs.", "For the dev and test data, we first extract the named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009) and extract date entities by matching spans with the date template.", "We further categorize the dataset with the categories we have defined.", "After categorization, we use Stanford CoreNLP to get the POS tags and dependencies of the categorized dataset.", "We run the oracle algorithm separately for the training and dev data (with alignment) to get the statistics of the individual phases.", "We use a cache size of 5 in our experiments.", "Results Individual Phase Accuracy We first evaluate the prediction accuracy of individual phases on the dev oracle data, assuming gold prediction history.", "The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions, respectively.", "Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.", "The feedforward baseline uses a separate feedforward network to predict each phase independently.", "We use the same alignment from the SemEval dataset to avoid differences resulting from the aligner.", "Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats uses hard attention.", "We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset.", "The sequence-to-sequence models perform better than the feedforward model on ShiftOrPop and ArcBinary, which shows that whole-sentence context information is important for the prediction of these two phases.", "On the other hand, the sequence-to-sequence models perform worse than the feedforward models on PushIndex and ArcLabel.", "One possible reason is that the model tries to optimize the overall accuracy, while these two phases account for fewer than 25% of the total transition actions and might be less attended to during
the update.", "Impact of Different Components Table 2 shows the impact of different components for the sequence-to-sequence model.", "We can see that the transition state features play a very important role in predicting the correct transition action.", "This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction.", "Relying on the sequence-to-sequence model alone does not perform well in disambiguating these choices, while the transition state can enforce direct constraints.", "We can also see that while the hard attention only attends to one position of the input, it performs slightly better than the soft attention model, and its time complexity is lower.", "Impact of Different Cache Sizes The cache size of the transition system can be optimized as a trade-off between coverage of AMR graphs and prediction accuracy.", "While a larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make.", "From Table 3 we can see this trade-off across different cache sizes; we use a cache size of 5, which gives the best overall balance.", "Comparison with other Parsers Table 4 shows the comparison with other AMR parsers.", "The first three systems are competitive neural models.", "We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017).", "Konstas et al.", "(2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use self-training on 20M unlabeled Gigaword sentences.", "Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction.", "We also show the performance of some of the best-performing models.", "While our hard attention achieves slightly lower performance in comparison with Wang et al.", "(2015a) and other top systems, it is worth noting that their approaches of using WordNet, semantic role labels, and word cluster features are complementary to ours.", "The alignment from the aligner and the concept identifier also play an important role in improving the performance.", "Recent work proposes to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.", "Dealing with Reentrancy Reentrancy is an important characteristic of AMR, and we evaluate the Smatch score only on the reentrant edges following Damonte et al.", "(2017).", "From Table 5 we can see that our hard attention model significantly outperforms the feedforward model in predicting reentrancies.", "This is because predicting reentrancy is directly related to the ArcBinary phase of the cache transition system, since that phase decides whether to make multiple arcs to the same vertex, and we can see from Table 1 that the hard attention model has significantly better prediction accuracy in this phase.", "We also compare the reentrancy results of our transition system with two other systems, Damonte et al.", "(2017) and JAMR, where these statistics are available.", "From Table 5, we can see that our cache transition system slightly outperforms these two systems in predicting reentrancies.", "Figure 5 shows a reentrancy example where JAMR and the feedforward network do not predict well, while our system predicts the correct output.", "JAMR fails to predict the reentrancy arc from desire-01 to i, and connects the wrong arc from \"live-01\" to \"-\" instead of
from \"desire-01\".", "The feedforward model fails to predict the arcs from desire-01 and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model long-term dependency and usually prefers making arcs between words that are close rather than distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5.", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-to-action-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transition-based framework and outperforms the soft attention model when limited data is available.", "While we have focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can potentially be applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017)." ] }
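As a companion to Equation 5 and the transition state features above, here is a minimal PyTorch sketch of one hard-attention decoder step; all dimensions, names, and the output layer shape are illustrative assumptions rather than the authors' released code.

```python
# One hard-attention decoder step (cf. Equation 5 plus e_f(C_t)).
import torch
import torch.nn as nn

class HardAttnDecoderStep(nn.Module):
    def __init__(self, act_emb=50, enc_dim=200, feat_dim=50,
                 hidden=100, n_actions=300):
        super().__init__()
        # Input: previous action embedding, one word state, one concept
        # state, and the concatenated transition-state feature embeddings.
        self.cell = nn.LSTMCell(act_emb + 2 * enc_dim + feat_dim, hidden)
        self.out = nn.Linear(hidden + 2 * enc_dim, n_actions)

    def forward(self, state, e_prev, H_w, H_c, l_w, l_c, feats):
        # Hard attention reads exactly one encoder position per sequence,
        # selected by the deterministic monotonic pointers l_w and l_c.
        h_w, h_c = H_w[l_w], H_c[l_c]
        x = torch.cat([e_prev, h_w, h_c, feats], dim=-1).unsqueeze(0)
        h, c = self.cell(x, state)
        logits = self.out(
            torch.cat([h, h_w.unsqueeze(0), h_c.unsqueeze(0)], dim=-1))
        return (h, c), torch.log_softmax(logits, dim=-1)
```

Because l_w and l_c advance deterministically, no attention distribution over all encoder positions is computed at each step, which is consistent with the lower time complexity noted in the results.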
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transition", "Soft vs Hard Attention for Sequence-to-action-sequence", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-8
Seq2seq hard attention+features
NOARC ARC L-ARG0 SHIFT PushIndex(1) Per want-01 go-01 Input sequence Concept sequence
NOARC ARC L-ARG0 SHIFT PushIndex(1) Per want-01 go-01 Input sequence Concept sequence
[]
GEM-SciDuet-train-125#paper-1343#slide-10
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence-to-graph mapping problem into a problem of mapping a word sequence to a transition action sequence, using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015), sentence compression (Takase et al., 2016), and event extraction (Huang et al., 2016).", "The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq. The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017).", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best-performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the current top-performing system.", "This raises the question of whether the advantages of neural and transition-based systems can be combined, as for example with the syntactic parser of Dyer et al.", "(2015), who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTMs to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training (sentence, AMR graph) pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "Prior work proposes a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "Follow-up work applies the cache transition system to AMR parsing and designs refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-action-sequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode whole-sentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use a bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017), which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left to right.", "When we process the buffer in this ordered manner, the sequence of target transition actions is also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of the output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequence-to-sequence models for AMR parsing.", "Cache Transition Parser We adopt a cache transition system, which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v_1, …, v_m].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form C = (σ, η, β, G_p), where σ, η, and β are as described above, and G_p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, …, $], [c_1, …, c_n
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices, constrained by the order of the input sentence.", "The final configuration is ([], [$, …, $], [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w_{1:n} = w_1, …, w_n to a sequence of concepts c_{1:n} = c_1, …, c_n.", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v_i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made, and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions, ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision on whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with a cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle has been shown in prior work.", "Let E_G be the set of edges of the gold graph G. We maintain the set of vertices not yet shifted into the cache as S, which is initialized with all vertices in G.
The vertices are ordered according to their aligned position in the word sequence, and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E_G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "1.", "ShiftOrPop phase: the oracle chooses transition Pop, in case there is no edge (v_m, v) in E_G such that vertex v is in S, or chooses transition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept, removing the vertex at this position and placing its (index, vertex) pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success; otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β_j to denote the j-th vertex in β.", "We choose a vertex v_{i*} in η such that: i* = argmax_{i ∈ [m]} min {j | (v_i, β_j) ∈ E_G}.", "In words, v_{i*} is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move the vertex v_{i*} out of the cache and push it onto the stack, for later processing.", "For each training example (x_{1:n}, g), the transition system generates the output AMR graph g from the input sequence x_{1:n} through an oracle sequence a_{1:q} ∈ Σ_a^*, where Σ_a is the union of all possible actions.", "We model the probability of the output with the action sequence: P(a_{1:q} | x_{1:n}) = ∏_{t=1}^{q} P(a_t | a_1, …, a_{t−1}, x_{1:n}; θ), which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence As shown in Figure 3, our sequence-to-sequence model takes a word sequence w_{1:n} and its mapped concept sequence c_{1:n} as the input, and the action sequence a_{1:q} as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w_{1:n}, we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ←h^w_j and →h^w_j are generated from the previous hidden states ←h^w_{j+1} and →h^w_{j−1} and the representation vector x_j of the current input word w_j: ←h^w_j = LSTM(←h^w_{j+1}, x_j), →h^w_j = LSTM(→h^w_{j−1}, x_j).", "The representation vector x_j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w_j: h^w_j = [←h^w_j; →h^w_j].", "Similarly, for the concept sequence, the final hidden state for concept c_j is: h^c_j = [←h^c_j; →h^c_j].", "LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two
attention memories H^w and H^c, where H^w is the concatenation of the state vectors of all input words, and H^c is defined correspondingly for the input concepts: H^w = [h^w_1; h^w_2; …; h^w_n] (1), H^c = [h^c_1; h^c_2; …; h^c_n] (2).", "The decoder yields an action sequence a_1, a_2, …, a_q as the output by calculating a sequence of hidden states s_1, s_2, …, s_q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model s_{t−1}; (2) the embedding of the previously generated action e_{t−1}; and (3) the previous context vectors for words μ^w_{t−1} and concepts μ^c_{t−1}, which are calculated using H^w and H^c, respectively.", "When t = 1, we initialize μ_0 as a zero vector, and set e_0 to the embedding of the start token \"<s>\".", "The hidden state s_0 is initialized as: s_0 = W_d[←h^w_1; →h^w_n; ←h^c_1; →h^c_n] + b_d, where W_d and b_d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of the previous action e_{t−1} and the previous context vectors for words μ^w_{t−1} and concepts μ^c_{t−1} into the LSTM model to update its hidden state: s_t = LSTM(s_{t−1}, [e_{t−1}; μ^w_{t−1}; μ^c_{t−1}]) (3).", "Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Taking the word sequence as an example, the weight α^w_{t,i} on h^w_i ∈ H^w for time-step t is calculated as: ε_{t,i} = v_c^T tanh(W_h h^w_i + W_s s_t + b_c), α^w_{t,i} = exp(ε_{t,i}) / Σ_{j=1}^{n} exp(ε_{t,j}).", "W_h, W_s, v_c, and b_c are model parameters.", "The new context vector is μ^w_t = Σ_{i=1}^{n} α^w_{t,i} h^w_i.", "The calculation of μ^c_t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: P_{Σa} = softmax(V_a[s_t; μ^w_t; μ^c_t] + b_a) (4), where V_a and b_a are learnable parameters, and the number of rows in V_a represents the number of all actions.", "The symbol Σ_a is the set of all actions.", "Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l_w and l_c, to model monotonic attention to the word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence, in contrast to Equation 3: s_t = LSTM(s_{t−1}, [e_{t−1}; h^w_{l_w}; h^c_{l_c}]) (5).", "Control Mechanism.", "Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus l_c to the next position after arc decisions to all the other m − 1 cache concepts are made.", "We move the word attention focus l_w to its aligned position in case the new concept is aligned; otherwise we don't move the word focus.", "As shown in Figure 4, after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for
Decoder Another difference between our model and that of Buys and Blunsom (2017) is that we extract features from the current transition state configuration C_t: e_f(C_t) = [e_{f_1}(C_t); e_{f_2}(C_t); …; e_{f_l}(C_t)], where l is the number of features extracted from C_t and e_{f_k}(C_t) (k = 1, …, l) represents the embedding for the k-th feature, which is learned during training.", "These feature embeddings are concatenated as e_f(C_t) and fed as additional input to the decoder.", "For the soft attention decoder: s_t = LSTM(s_{t−1}, [e_{t−1}; μ^w_{t−1}; μ^c_{t−1}; e_f(C_t)]), and for the hard attention decoder: s_t = LSTM(s_{t−1}, [e_{t−1}; h^w_{l_w}; h^c_{l_c}; e_f(C_t)]).", "We use the following features in our experiments: 1.", "Phase type: indicator features showing which phase the next transition is in.", "2.", "ShiftOrPop features: token features for the rightmost cache concept and the leftmost buffer concept.", "The number of dependencies to words on the right, and the top three dependency labels for them.", "3.", "ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to.", "The word, concept, and dependency distance between the two concepts.", "The labels of the two most recent outgoing arcs for these two concepts, their first incoming arc, and the number of incoming arcs.", "The dependency label between the two positions if there is a dependency arc between them.", "4.", "PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic given the last action output.", "For example, if the last action output is Shift, the current phase type would be PushIndex.", "We only extract the corresponding features for this phase and fill all the other feature types with -NULL- as placeholders.", "The features for the other phases are similar.", "AMR Parsing Training and Decoding We train our models using the cross-entropy loss over each oracle action sequence a*_1, …, a*_q: L = −Σ_{t=1}^{q} log P(a*_t | a*_1, …, a*_{t−1}, X; θ) (6), where X represents the input word and concept sequences, and θ denotes the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected to evaluate on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both encoder and decoder are set to 100.", "The word embeddings are initialized from GloVe pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with sizes of 20 and 50, respectively.", "Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment.", "We define some special categories such as named entities (NE), dates (DATE), single-rooted subgraphs involving multiple concepts (MULT), numbers (NUMBER), and phrases (PHRASE).", "The phrases are extracted based on the multiple-to-one alignment in the training data.", "One example phrase is \"more than\", which aligns to the single concept more-than.", "We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014), which greedily
aligns a span of words to AMR subgraphs using a set of heuristics.", "This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side.", "We use the semi-Markov model from Flanigan et al.", "(2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph.", "During decoding, our output has categories, and we need to map each category to the corresponding AMR concept or subgraph.", "We save a table Q which records the original subgraph each category is collapsed from, and map each category to its original subgraph representation.", "We also use heuristic rules to generate the target-side AMR subgraph representation for NE, DATE, and NUMBER based on the source-side tokens.", "Experiments We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 task 8 on meaning representation parsing (May, 2016).", "The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences, which mainly cover domains like newswire, discussion forum, etc.", "All parsing results are measured by Smatch (version 2.0.2).", "Experiment Settings We categorize the training data using the automatic alignment and dump a template for date entities and frequent phrases from the multiple-to-one alignment.", "We also generate an alignment table from tokens or phrases to their candidate target-side subgraphs.", "For the dev and test data, we first extract the named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009) and extract date entities by matching spans with the date template.", "We further categorize the dataset with the categories we have defined.", "After categorization, we use Stanford CoreNLP to get the POS tags and dependencies of the categorized dataset.", "We run the oracle algorithm separately for the training and dev data (with alignment) to get the statistics of the individual phases.", "We use a cache size of 5 in our experiments.", "Results Individual Phase Accuracy We first evaluate the prediction accuracy of individual phases on the dev oracle data, assuming gold prediction history.", "The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions, respectively.", "Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.", "The feedforward baseline uses a separate feedforward network to predict each phase independently.", "We use the same alignment from the SemEval dataset to avoid differences resulting from the aligner.", "Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats uses hard attention.", "We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset.", "The sequence-to-sequence models perform better than the feedforward model on ShiftOrPop and ArcBinary, which shows that whole-sentence context information is important for the prediction of these two phases.", "On the other hand, the sequence-to-sequence models perform worse than the feedforward models on PushIndex and ArcLabel.", "One possible reason is that the model tries to optimize the overall accuracy, while these two phases account for fewer than 25% of the total transition actions and might be less attended to during
Experiments

We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 Task 8 on meaning representation parsing (May, 2016). The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences, mainly covering domains such as newswire and discussion forums. All parsing results are measured with Smatch (version 2.0.2).

Experiment Settings

We categorize the training data using the automatic alignment and extract a template for date entities and frequent phrases from the multiple-to-one alignment. We also build an alignment table from tokens or phrases to their candidate target-side subgraphs. For the dev and test data, we first extract named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009) and extract date entities by matching spans against the date template; we then categorize the data with the categories defined above. After categorization, we use Stanford CoreNLP to obtain POS tags and dependencies for the categorized data. We run the oracle algorithm separately on the training and dev data (with alignment) to obtain statistics for the individual phases. We use a cache size of 5 in our experiments.

Results

Individual Phase Accuracy. We first evaluate the prediction accuracy of individual phases on the dev oracle data, assuming gold prediction history. The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions, respectively. Table 1 shows the phase-wise accuracy of our sequence-to-sequence model. Prior work uses a separate feedforward network to predict each phase independently; we use the same alignment from the SemEval dataset as in that work to avoid differences resulting from the aligner. Soft+feats is our sequence-to-sequence model with soft attention and transition state features, while Hard+feats uses hard attention. The hard attention model outperforms the soft attention model in all phases, which shows that single-pointer attention finds more relevant information than soft attention on this relatively small dataset. The sequence-to-sequence models outperform the feedforward model on ShiftOrPop and ArcBinary, which shows that whole-sentence context is important for predicting these two phases. On the other hand, they perform worse than the feedforward models on PushIndex and ArcLabel. One possible reason is that the model optimizes overall accuracy, while these two phases account for fewer than 25% of the total transition actions and may receive less attention during the update.
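Since these results hinge on the hard attention model, here is a minimal sketch of its deterministic pointer control described in Section 3.3 — the concept pointer $l_c$ advances after the $m-1$ arc decisions for the current concept, and the word pointer $l_w$ moves only when the new concept is aligned; the function signature and alignment interface are assumptions for illustration.

```python
# Minimal sketch (assumed interface) of the monotonic hard-attention control:
# both pointers start at 0 and only ever move forward.

def advance_pointers(l_w, l_c, arcs_done, m, aligned_word):
    """Move the word/concept focus after finishing one concept's arc phase."""
    if arcs_done == m - 1:            # arcs to the other m-1 cache cells done
        l_c += 1                      # focus the next buffer concept
        pos = aligned_word(l_c)
        if pos is not None:           # unaligned concepts leave l_w unchanged
            l_w = pos
    return l_w, l_c

# "John wants to go" -> concepts [Per, want-01, go-01]; go-01 aligns to "go"
# (word index 3), so the unaligned word "to" (index 2) is skipped.
alignment = {0: 0, 1: 1, 2: 3}
l_w, l_c = advance_pointers(l_w=1, l_c=1, arcs_done=2, m=3,
                            aligned_word=alignment.get)
assert (l_w, l_c) == (3, 2)
```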
from \"desire-01\".", "The feedforward model of and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5 .", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available.", "While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transi", "Soft vs Hard Attention for", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-10
AMR Coverage with different cache sizes
An example AMR graph for the sentence: John wants Mary to like him.
An example AMR graph for the sentence: John wants Mary to like him.
[]
GEM-SciDuet-train-125#paper-1343#slide-11
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence-to-graph mapping problem into a word-sequence-to-transition-action-sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task; our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "propose a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017) , which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing.", "Cache Transition Parser We adopt the transition system of , which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v 1 , .", ".", ".", ", v m ].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form: C = (σ, η, β, G p ) where σ, η and β are as described above, and G p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, .", ".", ".", ", $], [c 1 , .", ".", ".", ", c n 
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices constrained by the order of the input sentence.", "The final configuration is ([], [$, .", ".", ".", ", $] , [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w 1:n = w 1 , .", ".", ".", ", w n to a sequence of concepts c 1:n = c 1 , .", ".", ".", ", c n .", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "2 4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by .", "Let E G be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. 
The vertices are ordered according to their aligned position in the word sequence and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "ShiftOrPop phase: the oracle chooses transi - tion Pop, in case there is no edge (v m , v) in E G such that vertex v is in S, or chooses tran- sition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept and removes the vertex at this position and places its index, vertex pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success, otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β j to denote the j-th vertex in β.", "We choose a vertex v i * in η such that: In words, v i * is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move out of the cache vertex v i * and push it onto the stack, for later processing.", "i * = argmax i∈[m] min {j | (v i , β j ) ∈ E G } For each training example (x 1:n , g), the transition system generates the output AMR graph g from the input sequence x 1:n through an oracle sequence a 1:q ∈ Σ * a , where Σ a is the union of all possible actions.", "We model the probability of the output with the action sequence: P (a 1:q |x 1:n ) = q t=1 P (a t |a 1 , .", ".", ".", ", a t−1 , x 1:n ; θ) which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence Shown in Figure 3 , our sequence-to-sequence model takes a word sequence w 1:n and its mapped concept sequence c 1:n as the input, and the action sequence a 1:q as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w 1:n , we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ← − h w j and − → h w j are generated from the previous hidden states ← − h w j+1 and − → h w j−1 , and the representation vector x j of the current input word w j : ← − h w j = LSTM( ← − h w j+1 , x j ) − → h w j = LSTM( − → h w j−1 , x j ) The representation vector x j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w j : h w j = [ ← − h w j ; − → h w j ] Similarly, for the concept sequence, the final hidden state for concept c j is: h c j = [ ← − h c j ; − → h c j ] LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two 
attention memories H w and H c , where H w is the concatenation of the state vectors of all input words, and H c for input concepts correspondingly: H w = [h w 1 ; h w 2 ; .", ".", ".", "; h w n ] (1) H c = [h c 1 ; h c 2 ; .", ".", ".", "; h c n ] (2) The decoder yields an action sequence a 1 , a 2 , .", ".", ".", ", a q as the output by calculating a sequence of hidden states s 1 , s 2 .", ".", ".", ", s q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model s t−1 ; (2) the embedding of the previous generated action e t−1 ; and (3) the previous context vectors for words µ w t−1 and concepts µ c t−1 , which are calculated using H w and H c , respectively.", "When t = 1, we initialize µ 0 as a zero vector, and set e 0 to the embedding of the start token \" s \".", "The hidden state s 0 is initialized as: s 0 = W d [ ← − h w 1 ; − → h w n ; ← − h c 1 ; − → h c n ] + b d , where W d and b d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of previous action e t−1 and the previous context vectors for words µ w t−1 and concepts µ c t−1 into the LSTM model to update its hidden state.", "s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ]) (3) Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Take the word sequence as an example, α w t,i on h w i ∈ H w for time-step t is calculated as: t,i = v T c tanh(W h h w i + W s s t + b c ) α w t,i = exp( t,i ) N j=1 exp( t,j ) W h , W s , v c and b c are model parameters.", "The new context vector µ w t = n i=1 α w t,i h w i .", "The calculation of µ c t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: (4) where V a and b a are learnable parameters, and the number of rows in V a represents the number of all actions.", "The symbol Σ a is the set of all actions.", "P Σa = softmax(V a [s t ; µ w t ; µ c t ] + b a ), Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l w and l c , to model monotonic attention to word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence in contrast to Equation 3: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ]) (5) Control Mechanism.", "Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus l c to the next position after arc decisions to all the other m − 1 cache concepts are made.", "We move the word attention focus l w to its aligned position in case the new concept is aligned, otherwise we don't move the word focus.", "As shown in Figure 4 , after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for 
GEM-SciDuet-train-125#paper-1343#slide-11
Development results
Impact of various components (table: Model | P | R | F); impact of cache size (table: cache size | P | R | F)
Impact of various components (table: Model | P | R | F); impact of cache size (table: cache size | P | R | F)
[]
GEM-SciDuet-train-125#paper-1343#slide-12
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequenceto-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence to graph mapping problem to a word sequence to transition action sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "propose a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017) , which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing.", "Cache Transition Parser We adopt the transition system of , which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v 1 , .", ".", ".", ", v m ].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form: C = (σ, η, β, G p ) where σ, η and β are as described above, and G p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, .", ".", ".", ", $], [c 1 , .", ".", ".", ", c n 
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices constrained by the order of the input sentence.", "The final configuration is ([], [$, .", ".", ".", ", $] , [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w 1:n = w 1 , .", ".", ".", ", w n to a sequence of concepts c 1:n = c 1 , .", ".", ".", ", c n .", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "2 4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by .", "Let E G be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. 
The vertices are ordered according to their aligned position in the word sequence and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "ShiftOrPop phase: the oracle chooses transi - tion Pop, in case there is no edge (v m , v) in E G such that vertex v is in S, or chooses tran- sition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept and removes the vertex at this position and places its index, vertex pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success, otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β j to denote the j-th vertex in β.", "We choose a vertex v i * in η such that: In words, v i * is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move out of the cache vertex v i * and push it onto the stack, for later processing.", "i * = argmax i∈[m] min {j | (v i , β j ) ∈ E G } For each training example (x 1:n , g), the transition system generates the output AMR graph g from the input sequence x 1:n through an oracle sequence a 1:q ∈ Σ * a , where Σ a is the union of all possible actions.", "We model the probability of the output with the action sequence: P (a 1:q |x 1:n ) = q t=1 P (a t |a 1 , .", ".", ".", ", a t−1 , x 1:n ; θ) which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence Shown in Figure 3 , our sequence-to-sequence model takes a word sequence w 1:n and its mapped concept sequence c 1:n as the input, and the action sequence a 1:q as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w 1:n , we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ← − h w j and − → h w j are generated from the previous hidden states ← − h w j+1 and − → h w j−1 , and the representation vector x j of the current input word w j : ← − h w j = LSTM( ← − h w j+1 , x j ) − → h w j = LSTM( − → h w j−1 , x j ) The representation vector x j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w j : h w j = [ ← − h w j ; − → h w j ] Similarly, for the concept sequence, the final hidden state for concept c j is: h c j = [ ← − h c j ; − → h c j ] LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two 
attention memories H w and H c , where H w is the concatenation of the state vectors of all input words, and H c for input concepts correspondingly: H w = [h w 1 ; h w 2 ; .", ".", ".", "; h w n ] (1) H c = [h c 1 ; h c 2 ; .", ".", ".", "; h c n ] (2) The decoder yields an action sequence a 1 , a 2 , .", ".", ".", ", a q as the output by calculating a sequence of hidden states s 1 , s 2 .", ".", ".", ", s q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model s t−1 ; (2) the embedding of the previous generated action e t−1 ; and (3) the previous context vectors for words µ w t−1 and concepts µ c t−1 , which are calculated using H w and H c , respectively.", "When t = 1, we initialize µ 0 as a zero vector, and set e 0 to the embedding of the start token \" s \".", "The hidden state s 0 is initialized as: s 0 = W d [ ← − h w 1 ; − → h w n ; ← − h c 1 ; − → h c n ] + b d , where W d and b d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of previous action e t−1 and the previous context vectors for words µ w t−1 and concepts µ c t−1 into the LSTM model to update its hidden state.", "s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ]) (3) Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Take the word sequence as an example, α w t,i on h w i ∈ H w for time-step t is calculated as: t,i = v T c tanh(W h h w i + W s s t + b c ) α w t,i = exp( t,i ) N j=1 exp( t,j ) W h , W s , v c and b c are model parameters.", "The new context vector µ w t = n i=1 α w t,i h w i .", "The calculation of µ c t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: (4) where V a and b a are learnable parameters, and the number of rows in V a represents the number of all actions.", "The symbol Σ a is the set of all actions.", "P Σa = softmax(V a [s t ; µ w t ; µ c t ] + b a ), Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l w and l c , to model monotonic attention to word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence in contrast to Equation 3: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ]) (5) Control Mechanism.", "Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus l c to the next position after arc decisions to all the other m − 1 cache concepts are made.", "We move the word attention focus l w to its aligned position in case the new concept is aligned, otherwise we don't move the word focus.", "As shown in Figure 4 , after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for 
Decoder Another difference of our model with Buys and Blunsom (2017) is that we extract features from the current transition state configuration C t : e f (C t ) = [e f 1 (C t ); e f 2 (C t ); · · · ; e f l (C t )] where l is the number of features extracted from C t and e f k (C t ) (k = 1, .", ".", ".", ", l) represents the embedding for the k-th feature, which is learned during training.", "These feature embeddings are concatenated as e f (C t ), and fed as additional input to the decoder.", "For the soft attention decoder: s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ; e f (C t )]) and for the hard attention decoder: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ; e f (C t )]) We use the following features in our experiments: 1.", "Phase type: indicator features showing which phase the next transition is.", "2.", "ShiftOrPop features: token features 3 for the rightmost cache concept and the leftmost buffer concept.", "Number of dependencies to words on the right, and the top three dependency labels for them.", "3.", "ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to.", "Word, concept and dependency distance between the two concepts.", "The labels for the two most recent outgoing arcs for these two concepts and their first incoming arc and the number of incoming arcs.", "Dependency label between the two positions if there is a dependency arc between them.", "4.", "PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic from the last action output.", "For example, if the last action output is Shift, the current phase type would be PushIndex.", "We only extract corresponding features for this phase and fill all the other feature types with -NULLas placeholders.", "The features for other phases are similar.", "AMR Parsing Training and Decoding We train our models using the cross-entropy loss, over each oracle action sequence a * 1 , .", ".", ".", ", a * q : L = − q t=1 log P (a * t |a * 1 , .", ".", ".", ", a * t−1 , X; θ), (6) where X represents the input word and concept sequences, and θ is the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected to evaluate on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both encoder and decoder are set to 100.", "The word embeddings are initialized from Glove pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with the sizes of 20 and 50, respectively.", "Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment.", "We define some special categories such as named entities (NE), dates (DATE), single rooted subgraphs involving multiple concepts (MULT) 4 , numbers (NUMBER) and phrases (PHRASE).", "The phrases are extracted based on the multiple-to-one alignment in the training data.", "One example phrase is more than which aligns to a single concept more-than.", "We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014) , which greedily 
"This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side.", "We use the semi-Markov model from Flanigan et al. (2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph.", "During decoding, our output contains category tokens, and we need to map each category back to the corresponding AMR concept or subgraph.", "We save a table $Q$ that records the original subgraph each category was collapsed from, and map each category back to its original subgraph representation.", "We also use heuristic rules to generate the target-side AMR subgraph representations for NE, DATE, and NUMBER from the source-side tokens.", "Experiments: We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 Task 8 on meaning representation parsing (May, 2016).", "The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences, mainly covering domains such as newswire and discussion forums.", "All parsing results are measured by Smatch (version 2.0.2).", "Experiment Settings: We categorize the training data using the automatic alignment and dump templates for date entities and frequent phrases from the multiple-to-one alignments.", "We also generate an alignment table from tokens or phrases to their candidate target-side subgraphs.", "For the dev and test data, we first extract named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009) and extract date entities by matching spans against the date template.", "We then categorize the dataset with the categories we have defined.", "After categorization, we use Stanford CoreNLP to obtain POS tags and dependencies for the categorized dataset.", "We run the oracle algorithm separately on the training and dev data (with alignment) to collect statistics for the individual phases.", "We use a cache size of 5 in our experiments.", "Results: Individual Phase Accuracy: We first evaluate the prediction accuracy of the individual phases on the dev oracle data, assuming a gold prediction history.", "The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions, respectively.", "Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.", "The feedforward baseline uses a separate feedforward network to predict each phase independently.", "We use the same alignment on the SemEval dataset as the baseline to avoid differences resulting from the aligner.", "Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats uses hard attention.", "We can see that the hard attention model outperforms the soft attention model in all phases, which shows that single-pointer attention finds more relevant information than soft attention on this relatively small dataset.", "The sequence-to-sequence models perform better than the feedforward model on ShiftOrPop and ArcBinary, which shows that whole-sentence context information is important for the prediction of these two phases.", "On the other hand, the sequence-to-sequence models perform worse than the feedforward models on PushIndex and ArcLabel.", "One possible reason is that the model optimizes overall accuracy, while these two phases account for fewer than 25% of the total transition actions and might be less attended to during the update.",
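Looking back at the preprocessing and postprocessing just described, here is a hypothetical sketch of the collapse step and the table-Q restoration. The category names follow the text, while the data structures, regular expression, and function names are ours.

```python
import re

DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")  # toy date pattern

def collapse(tokens, aligned_subgraphs):
    """Replace aligned spans with category placeholders; record originals
    in a table Q so they can be restored after decoding."""
    out, Q = [], {}
    for tok, sub in zip(tokens, aligned_subgraphs):
        if sub is None:              # unaligned token: keep as-is
            out.append(tok)
        elif DATE_RE.match(tok):
            key = f"DATE_{len(Q)}"
            Q[key] = sub             # remember the collapsed subgraph
            out.append(key)
        else:                        # NE / NUMBER / MULT / PHRASE analogous
            out.append(tok)
    return out, Q

def restore(output_tokens, Q):
    """Map each predicted category token back to its original subgraph."""
    return [Q.get(tok, tok) for tok in output_tokens]
```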
"Impact of Different Components: Table 2 shows the impact of different components on the sequence-to-sequence model.", "We can see that the transition state features play a very important role in predicting the correct transition action.", "This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction.", "Relying on the sequence-to-sequence model alone does not disambiguate these choices well, while the transition state can enforce direct constraints.", "We can also see that although the hard attention attends to only one position of the input, it performs slightly better than the soft attention model while having lower time complexity.", "Impact of Different Cache Sizes: The cache size of the transition system can be tuned as a trade-off between coverage of AMR graphs and prediction accuracy.", "While a larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make.", "Table 3 shows this trade-off across different cache sizes.", "Comparison with Other Parsers: Table 4 shows the comparison with other AMR parsers.", "The first three systems are competitive neural models.", "We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017).", "Konstas et al. (2017) use a linearization approach that converts the AMR graph into a sequence structure and apply self-training on 20M unlabeled Gigaword sentences.", "Our model achieves better results without using additional unlabeled data, which shows that the relevant information from the transition system is very useful for the prediction.", "We also show the performance of some of the best-performing models.", "While our hard attention model achieves slightly lower performance than Wang et al. (2015a) and one other top-performing system, it is worth noting that their approaches of using WordNet, semantic role labels, and word cluster features are complementary to ours.", "The alignment from the aligner and the concept identifier also play an important role in the performance.", "Improving the alignment and the concept identification has been proposed as a way to improve AMR parsing, and it can also be combined with our system to improve the performance of a sequence-to-sequence model.", "Dealing with Reentrancy: Reentrancy is an important characteristic of AMR, and we evaluate the Smatch score only on the reentrant edges, following Damonte et al. (2017).", "From Table 5 we can see that our hard attention model significantly outperforms the feedforward model in predicting reentrancies.", "This is because predicting reentrancy is directly related to the ArcBinary phase of the cache transition system, since that phase decides whether to make multiple arc decisions to the same vertex, and we can see from Table 1 that the hard attention model has significantly better prediction accuracy in this phase.", "We also compare the reentrancy results of our transition system with two other systems, Damonte et al. (2017) and JAMR, for which these statistics are available.", "From Table 5, we can see that our cache transition system slightly outperforms these two systems in predicting reentrancies.", "Figure 5 shows a reentrancy example on which JAMR and the feedforward network do not predict well, while our system predicts the correct output.", "JAMR fails to predict the reentrancy arc from desire-01 to i, and connects the wrong arc from \"live-01\" to \"-\" instead of from \"desire-01\".",
from \"desire-01\".", "The feedforward model of and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5 .", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available.", "While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transi", "Soft vs Hard Attention for", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-12
Main results
Model P R F
Model P R F
[]
GEM-SciDuet-train-125#paper-1343#slide-13
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence-to-graph mapping problem into a word-sequence-to-transition-action-sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models. (fn. 1)
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence obtained by running a deterministic oracle algorithm on the training (sentence, AMR graph) pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "A special transition framework called a cache transition system has been proposed to generate the set of semantic graphs; it adapts the stack-based parsing system by adding a working set, referred to as a cache, to the traditional stack and buffer.", "Follow-up work applies the cache transition system to AMR parsing and designs refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-action-sequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode whole-sentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use BiLSTMs to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmas, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017), which deals with the nearly monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions is also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of the output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequence-to-sequence models for AMR parsing.", "Cache Transition Parser: We adopt a cache transition system that has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence $\sigma$ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts $\beta$ containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts $\eta = [v_1, \ldots, v_m]$.", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form $C = (\sigma, \eta, \beta, G_p)$, where $\sigma$, $\eta$, and $\beta$ are as described above, and $G_p$ is the partial graph that has been built so far.", "The initial configuration of the parser is $([], [\$, \ldots, \$], [c_1, \ldots, c_n], \emptyset)$, meaning that the stack and the partial graph are initially empty, and the cache is filled with $m$ occurrences of the special symbol \$.",
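As an illustration of the configuration just defined, a minimal Python sketch; the field names are ours, and '$' marks an empty cache slot as in the text.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class Configuration:
    """C = (sigma, eta, beta, G_p)."""
    stack: List[Tuple[int, str]] = field(default_factory=list)    # sigma
    cache: List[str] = field(default_factory=list)                # eta
    buffer: List[str] = field(default_factory=list)               # beta
    graph: Set[Tuple[str, str, str]] = field(default_factory=set) # G_p arcs

def initial_config(concepts: List[str], m: int) -> Configuration:
    """Empty stack and graph, cache of m '$' symbols, full buffer."""
    return Configuration(cache=["$"] * m, buffer=list(concepts))
```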
"The buffer is initialized with all the graph vertices, constrained by the order of the input sentence.", "The final configuration is $([], [\$, \ldots, \$], [], G)$, where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph $G$ is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence $w_{1:n} = w_1, \ldots, w_n$ to a sequence of concepts $c_{1:n} = c_1, \ldots, c_n$.", "We decouple the problem of concept identification from the transition system and initialize the buffer with a concept sequence recognized by another classifier, which we introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this significantly reduces the target vocabulary size.", "The transitions of the parser are specified as follows.", "1. Pop pops a pair $(i, v)$ from the stack, where the integer $i$ records the position in the cache that it originally came from.", "We place concept $v$ at position $i$ in the cache, shifting the remainder of the cache one position to the right and discarding the last element in the cache.", "2. Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3. PushIndex($i$) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept $v_i$ appearing at position $i$ in the cache and push it onto the stack $\sigma$, along with the integer $i$ recording its original position in the cache (fn. 2).", "4. Arc($i$, $d$, $l$) builds an arc with direction $d$ and label $l$ between the rightmost concept and the $i$-th concept in the cache.", "The label $l$ is NULL if no arc is made, and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions, ARC and $d$-$l$.", "We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can view this phase as first making a binary decision about whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with a cache size of 3.", "Oracle Extraction Algorithm: We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size $m$; the correctness of this oracle has been established in prior work.", "Let $E_G$ be the set of edges of the gold graph $G$.", "We maintain the set of vertices that have not yet been shifted into the cache as $S$, which is initialized with all vertices in $G$.",
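Returning to the transitions defined above, a hedged sketch of Pop and PushIndex on the Configuration sketched earlier; error handling and the Shift/Arc bookkeeping are omitted.

```python
def pop(c: Configuration) -> None:
    """Pop: return the stacked concept to its recorded cache slot; the
    cache shifts right from that slot and drops its last element."""
    i, v = c.stack.pop()
    c.cache = c.cache[:i] + [v] + c.cache[i:-1]

def push_index(c: Configuration, i: int) -> None:
    """PushIndex(i): move cache concept v_i (with its slot index) onto the
    stack and append the next buffer concept at the last cache position."""
    c.stack.append((i, c.cache[i]))
    c.cache = c.cache[:i] + c.cache[i + 1:] + [c.buffer.pop(0)]
```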
"The vertices are ordered according to their aligned positions in the word sequence, and the unaligned vertices are listed according to their order in a depth-first traversal of the graph.", "The oracle algorithm can look into $E_G$ to decide which transition to take next, or else decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "1. ShiftOrPop phase: the oracle chooses transition Pop in case there is no edge $(v_m, v)$ in $E_G$ such that vertex $v$ is in $S$; otherwise it chooses transition Shift and proceeds to the next phase.", "2. PushIndex phase: in this phase, the oracle first chooses a position $i$ (as explained below) in the cache to place the candidate concept, removes the vertex at this position, and places its (index, vertex) pair onto the stack.", "The oracle chooses transition PushIndex($i$) and proceeds to the next phase.", "3. ArcBinary, ArcLabel phases: between the rightmost cache concept and each other concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to $m-1$ cache concepts have been made, we jump to the next step.", "4. If the stack and buffer are both empty and the cache is in the initial state, the oracle finishes with success; otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex($i$).", "For $j \in [|\beta|]$, we write $\beta_j$ to denote the $j$-th vertex in $\beta$.", "We choose a vertex $v_{i^*}$ in $\eta$ such that $i^* = \arg\max_{i \in [m]} \min\{j \mid (v_i, \beta_j) \in E_G\}$.", "In words, $v_{i^*}$ is the concept in the cache whose closest neighbor in the buffer $\beta$ is furthest forward in $\beta$.", "We move vertex $v_{i^*}$ out of the cache and push it onto the stack for later processing.", "For each training example $(x_{1:n}, g)$, the transition system generates the output AMR graph $g$ from the input sequence $x_{1:n}$ through an oracle action sequence $a_{1:q} \in \Sigma_a^*$, where $\Sigma_a$ is the union of all possible actions.", "We model the probability of the output with the action sequence $P(a_{1:q} \mid x_{1:n}) = \prod_{t=1}^{q} P(a_t \mid a_1, \ldots, a_{t-1}, x_{1:n}; \theta)$, which we estimate using a sequence-to-sequence model, as described in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence: As shown in Figure 3, our sequence-to-sequence model takes a word sequence $w_{1:n}$ and its mapped concept sequence $c_{1:n}$ as input, and the action sequence $a_{1:q}$ as output.", "It uses two BiLSTM encoders, each encoding one input sequence.", "As the two encoders have the same structure, we only describe the encoder for the word sequence in detail below.", "BiLSTM Encoder: Given an input word sequence $w_{1:n}$, we use a bidirectional LSTM to encode it.", "At each step $j$, the current hidden states $\overleftarrow{h}^w_j$ and $\overrightarrow{h}^w_j$ are generated from the previous hidden states $\overleftarrow{h}^w_{j+1}$ and $\overrightarrow{h}^w_{j-1}$ and the representation vector $x_j$ of the current input word $w_j$: $\overleftarrow{h}^w_j = \mathrm{LSTM}(\overleftarrow{h}^w_{j+1}, x_j)$, $\overrightarrow{h}^w_j = \mathrm{LSTM}(\overrightarrow{h}^w_{j-1}, x_j)$.", "The representation vector $x_j$ is the concatenation of the embeddings of the word, its lemma, and its POS tag.", "The hidden states of both directions are then concatenated as the final hidden state for word $w_j$: $h^w_j = [\overleftarrow{h}^w_j; \overrightarrow{h}^w_j]$; similarly, the final hidden state for concept $c_j$ is $h^c_j = [\overleftarrow{h}^c_j; \overrightarrow{h}^c_j]$.", "LSTM Decoder with Soft Attention: We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two attention memories over the word and concept sequences." ] }
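The vertex-selection rule for PushIndex(i) above can be sketched as follows. Treating a cache vertex with no buffer neighbor as having distance len(buffer) (so it is evicted first), and testing edges in both directions, are our assumptions.

```python
def choose_cache_position(cache, buffer, gold_edges, m):
    """i* = argmax_i min{ j | (v_i, beta_j) in E_G }: evict the cache
    vertex whose closest buffer neighbor lies furthest forward."""
    def closest(v):
        js = [j for j, b in enumerate(buffer)
              if (v, b) in gold_edges or (b, v) in gold_edges]
        return min(js) if js else len(buffer)
    return max(range(m), key=lambda i: closest(cache[i]))
```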
GEM-SciDuet-train-125#paper-1343#slide-13
Accuracy on reentrancies
Model P R F
Model P R F
[]
GEM-SciDuet-train-125#paper-1343#slide-14
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence-to-graph mapping problem into a word-sequence-to-transition-action-sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models. (fn. 1)
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "propose a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017) , which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing.", "Cache Transition Parser We adopt the transition system of , which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v 1 , .", ".", ".", ", v m ].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form: C = (σ, η, β, G p ) where σ, η and β are as described above, and G p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, .", ".", ".", ", $], [c 1 , .", ".", ".", ", c n 
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices constrained by the order of the input sentence.", "The final configuration is ([], [$, .", ".", ".", ", $] , [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w 1:n = w 1 , .", ".", ".", ", w n to a sequence of concepts c 1:n = c 1 , .", ".", ".", ", c n .", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "2 4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by .", "Let E G be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. 
The vertices are ordered according to their aligned position in the word sequence and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "ShiftOrPop phase: the oracle chooses transi - tion Pop, in case there is no edge (v m , v) in E G such that vertex v is in S, or chooses tran- sition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept and removes the vertex at this position and places its index, vertex pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success, otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β j to denote the j-th vertex in β.", "We choose a vertex v i * in η such that: In words, v i * is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move out of the cache vertex v i * and push it onto the stack, for later processing.", "i * = argmax i∈[m] min {j | (v i , β j ) ∈ E G } For each training example (x 1:n , g), the transition system generates the output AMR graph g from the input sequence x 1:n through an oracle sequence a 1:q ∈ Σ * a , where Σ a is the union of all possible actions.", "We model the probability of the output with the action sequence: P (a 1:q |x 1:n ) = q t=1 P (a t |a 1 , .", ".", ".", ", a t−1 , x 1:n ; θ) which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence Shown in Figure 3 , our sequence-to-sequence model takes a word sequence w 1:n and its mapped concept sequence c 1:n as the input, and the action sequence a 1:q as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w 1:n , we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ← − h w j and − → h w j are generated from the previous hidden states ← − h w j+1 and − → h w j−1 , and the representation vector x j of the current input word w j : ← − h w j = LSTM( ← − h w j+1 , x j ) − → h w j = LSTM( − → h w j−1 , x j ) The representation vector x j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w j : h w j = [ ← − h w j ; − → h w j ] Similarly, for the concept sequence, the final hidden state for concept c j is: h c j = [ ← − h c j ; − → h c j ] LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two 
attention memories H w and H c , where H w is the concatenation of the state vectors of all input words, and H c for input concepts correspondingly: H w = [h w 1 ; h w 2 ; .", ".", ".", "; h w n ] (1) H c = [h c 1 ; h c 2 ; .", ".", ".", "; h c n ] (2) The decoder yields an action sequence a 1 , a 2 , .", ".", ".", ", a q as the output by calculating a sequence of hidden states s 1 , s 2 .", ".", ".", ", s q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model s t−1 ; (2) the embedding of the previous generated action e t−1 ; and (3) the previous context vectors for words µ w t−1 and concepts µ c t−1 , which are calculated using H w and H c , respectively.", "When t = 1, we initialize µ 0 as a zero vector, and set e 0 to the embedding of the start token \" s \".", "The hidden state s 0 is initialized as: s 0 = W d [ ← − h w 1 ; − → h w n ; ← − h c 1 ; − → h c n ] + b d , where W d and b d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of previous action e t−1 and the previous context vectors for words µ w t−1 and concepts µ c t−1 into the LSTM model to update its hidden state.", "s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ]) (3) Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Take the word sequence as an example, α w t,i on h w i ∈ H w for time-step t is calculated as: t,i = v T c tanh(W h h w i + W s s t + b c ) α w t,i = exp( t,i ) N j=1 exp( t,j ) W h , W s , v c and b c are model parameters.", "The new context vector µ w t = n i=1 α w t,i h w i .", "The calculation of µ c t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: (4) where V a and b a are learnable parameters, and the number of rows in V a represents the number of all actions.", "The symbol Σ a is the set of all actions.", "P Σa = softmax(V a [s t ; µ w t ; µ c t ] + b a ), Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l w and l c , to model monotonic attention to word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence in contrast to Equation 3: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ]) (5) Control Mechanism.", "Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus l c to the next position after arc decisions to all the other m − 1 cache concepts are made.", "We move the word attention focus l w to its aligned position in case the new concept is aligned, otherwise we don't move the word focus.", "As shown in Figure 4 , after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for 
"Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, $l_w$ and $l_c$, to model monotonic attention to the word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence, in contrast to Equation (3): $s_t = \mathrm{LSTM}(s_{t-1}, [e_{t-1}; h^w_{l_w}; h^c_{l_c}])$ (5).", "Control Mechanism: both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus $l_c$ to the next position after arc decisions to all the other $m-1$ cache concepts are made.", "We move the word attention focus $l_w$ to its aligned position in case the new concept is aligned; otherwise we do not move the word focus.", "As shown in Figure 4, after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for Decoder Another difference of our model from Buys and Blunsom (2017) is that we extract features from the current transition state configuration $C_t$: $e_f(C_t) = [e_{f_1}(C_t); e_{f_2}(C_t); \cdots; e_{f_l}(C_t)]$, where $l$ is the number of features extracted from $C_t$ and $e_{f_k}(C_t)$ ($k = 1, \ldots, l$) represents the embedding for the $k$-th feature, which is learned during training.", "These feature embeddings are concatenated as $e_f(C_t)$ and fed as additional input to the decoder.", "For the soft attention decoder: $s_t = \mathrm{LSTM}(s_{t-1}, [e_{t-1}; \mu^w_{t-1}; \mu^c_{t-1}; e_f(C_t)])$, and for the hard attention decoder: $s_t = \mathrm{LSTM}(s_{t-1}, [e_{t-1}; h^w_{l_w}; h^c_{l_c}; e_f(C_t)])$.", "We use the following features in our experiments: 1. Phase type: indicator features showing which phase the next transition is in. 2. ShiftOrPop features: token features for the rightmost cache concept and the leftmost buffer concept; the number of dependencies to words on the right, and the top three dependency labels for them. 3. ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to; word, concept and dependency distance between the two concepts; the labels of the two most recent outgoing arcs of these two concepts, their first incoming arc, and the number of incoming arcs; the dependency label between the two positions if there is a dependency arc between them. 4. PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic from the last action output.", "For example, if the last action output is Shift, the current phase type would be PushIndex.", "We only extract the corresponding features for this phase and fill all the other feature types with -NULL- as placeholders.", "The features for other phases are similar.",
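The deterministic control mechanism for the two hard-attention pointers described above can be sketched as follows; this is an illustrative reading of the text, assuming an alignment table from concept positions to word positions (None for unaligned concepts), not the released implementation:

```python
class MonotonicPointers:
    """Deterministic control of the hard-attention pointers l_w and l_c."""
    def __init__(self, alignment):
        self.l_w, self.l_c = 0, 0
        self.alignment = alignment   # concept position -> word position or None
    def advance(self):
        # called once arc decisions to the other m-1 cache concepts are done
        self.l_c += 1
        aligned = self.alignment.get(self.l_c)
        if aligned is not None:      # move the word focus only for aligned concepts
            self.l_w = aligned
```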
"AMR Parsing Training and Decoding We train our models using the cross-entropy loss over each oracle action sequence $a^*_1, \ldots, a^*_q$: $L = -\sum_{t=1}^{q} \log P(a^*_t \mid a^*_1, \ldots, a^*_{t-1}, X; \theta)$ (6), where $X$ represents the input word and concept sequences, and $\theta$ denotes the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected to evaluate on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both encoder and decoder are set to 100.", "The word embeddings are initialized from GloVe pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with sizes of 20 and 50, respectively.", "Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment.", "We define some special categories such as named entities (NE), dates (DATE), single-rooted subgraphs involving multiple concepts (MULT), numbers (NUMBER) and phrases (PHRASE).", "The phrases are extracted based on the multiple-to-one alignment in the training data.", "One example phrase is more than, which aligns to the single concept more-than.", "We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014), which greedily aligns a span of words to AMR subgraphs using a set of heuristics.", "This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side.", "We use the semi-Markov model from Flanigan et al. (2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph.", "During decoding, our output has categories, and we need to map each category to the corresponding AMR concept or subgraph.", "We save a table $Q$ which shows the original subgraph each category is collapsed from, and map each category to its original subgraph representation.", "We also use heuristic rules to generate the target-side AMR subgraph representation for NE, DATE, and NUMBER based on the source-side tokens.", "Experiments We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 Task 8 on meaning representation parsing (May, 2016).", "The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences, which mainly cover domains like newswire, discussion forum, etc.", "All parsing results are measured by Smatch (version 2.0.2).", "Experiment Settings We categorize the training data using the automatic alignment and dump a template for date entities and frequent phrases from the multiple-to-one alignment.", "We also generate an alignment table from tokens or phrases to their candidate target-side subgraphs.", "For the dev and test data, we first extract the named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009) and extract date entities by matching spans with the date template.", "We further categorize the dataset with the categories we have defined.", "After categorization, we use Stanford CoreNLP to get the POS tags and dependencies of the categorized dataset.", "We run the oracle algorithm separately for the training and dev data (with alignment) to get the statistics of individual phases.", "We use a cache size of 5 in our experiments.",
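A hedged PyTorch sketch of one training update under the cross-entropy loss of Equation (6); the model interface (returning per-step logits over the action vocabulary under teacher forcing) is an assumption of this sketch, not the paper's code:

```python
import torch

def train_step(model, optimizer, words, concepts, oracle_actions):
    """One update: summed negative log-likelihood of the oracle action
    sequence, i.e. -sum_t log P(a*_t | a*_1..a*_{t-1}, X; theta)."""
    model.train()                                    # activates dropout (rate 0.3)
    logits = model(words, concepts, oracle_actions)  # assumed shape (q, |actions|)
    loss = torch.nn.functional.cross_entropy(
        logits, oracle_actions, reduction="sum")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(model.parameters(), lr=0.001), as in the paper
```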
"Results Individual Phase Accuracy We first evaluate the prediction accuracy of individual phases on the dev oracle data, assuming gold prediction history.", "The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions, respectively.", "Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.", "Peng et al. (2018) use a separate feedforward network to predict each phase independently.", "We use the same alignment from the SemEval dataset as in Peng et al. (2018) to avoid differences resulting from the aligner.", "Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats uses hard attention.", "We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset.", "The sequence-to-sequence models perform better than the feedforward model of Peng et al. (2018) on ShiftOrPop and ArcBinary, which shows that whole-sentence context information is important for the prediction of these two phases.", "On the other hand, the sequence-to-sequence models perform worse than the feedforward models on PushIndex and ArcLabel.", "One possible reason is that the model tries to optimize the overall accuracy, while these two phases account for fewer than 25% of the total transition actions and might be less attended to during the update.", "Impact of Different Components Table 2 shows the impact of different components of the sequence-to-sequence model.", "We can see that the transition state features play a very important role in predicting the correct transition action.", "This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction.", "Relying on the sequence-to-sequence model alone does not perform well in disambiguating these choices, while the transition state can enforce direct constraints.", "We can also see that while the hard attention only attends to one position of the input, it performs slightly better than the soft attention model, while the time complexity is lower.", "Impact of Different Cache Sizes The cache size of the transition system can be optimized as a trade-off between the coverage of AMR graphs and the prediction accuracy.", "While a larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make.", "From Table 3 we can see that a cache size of 5 offers the best trade-off, which is the setting used in our experiments.", "Comparison with Other Parsers Table 4 shows the comparison with other AMR parsers.", "The first three systems are competitive neural models.", "We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017).", "Konstas et al. (2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use self-training on 20M unlabeled Gigaword sentences.", "Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction.", "We also show the performance of some of the best-performing models.", "While our hard attention model achieves slightly lower performance in comparison with Wang et al. (2015a), it is worth noting that their approach of using WordNet, semantic role labels and word cluster features is complementary to ours.", "The alignment from the aligner and the concept identifier also play an important role in improving the performance.", "Related work proposes to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.",
from \"desire-01\".", "The feedforward model of and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5 .", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available.", "While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transi", "Soft vs Hard Attention for", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-14
Reentrancy example
Sentence: I have no desire to live in any city. [Flattened slide diagram: three AMR graphs over the concepts i, desire-01, live-01, any, and city, comparing the JAMR output, the Peng et al. (2018) output, and our hard attention output; edge labels include ARG0, ARG1, location, polarity, and mod, and only the hard attention output carries both ARG0 arcs into i.]
Sentence: I have no desire to live in any city. [Flattened slide diagram: three AMR graphs over the concepts i, desire-01, live-01, any, and city, comparing the JAMR output, the Peng et al. (2018) output, and our hard attention output; edge labels include ARG0, ARG1, location, polarity, and mod, and only the hard attention output carries both ARG0 arcs into i.]
[]
GEM-SciDuet-train-125#paper-1343#slide-15
1343
Sequence-to-sequence Models for Cache Transition Systems
In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence-to-graph mapping problem into a word-sequence-to-transition-action-sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013 ) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph.", "Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts.", "AMR has been used in various applications such as text summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "1 The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq The task of AMR graph parsing is to map natural language strings to AMR semantic graphs.", "Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017) .", "On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing.", "Peng et al.", "(2017) propose a linearization approach that encodes labeled graphs as sequences.", "To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models.", "Konstas et al.", "(2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate.", "However, the final performance still falls behind the best-performing models.", "The best performing AMR parsers model graph structures directly.", "One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system of , which is currently the top performing system.", "This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al.", "(2015) , who use stack LSTMs to capture action history information in the transition state of the transition system.", "Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions.", "Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence 
approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs.", "They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer.", "propose a special transition framework called a cache transition system to generate the set of semantic graphs.", "They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer.", "apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues.", "In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems.", "We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output.", "The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data.", "More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories.", "We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing.", "We extend the hard attention model of Aharoni and Goldberg (2017) , which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right.", "When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order.", "On the decoder side, we augment the prediction of output action with embedding features from the current transition state.", "Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing.", "Cache Transition Parser We adopt the transition system of , which has been shown to have good coverage of the graphs found in AMR.", "A cache transition parser consists of a stack, a cache, and an input buffer.", "The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position.", "The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph.", "(We use the terms concept and vertex interchangeably in this paper.)", "Finally, the cache is a sequence of concepts η = [v 1 , .", ".", ".", ", v m ].", "The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element.", "Operationally, the functioning of the parser can be described in terms of configurations and transitions.", "A configuration of our parser has the form: C = (σ, η, β, G p ) where σ, η and β are as described above, and G p is the partial graph that has been built so far.", "The initial configuration of the parser is ([], [$, .", ".", ".", ", $], [c 1 , .", ".", ".", ", c n 
], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $.", "The buffer is initialized with all the graph vertices constrained by the order of the input sentence.", "The final configuration is ([], [$, .", ".", ".", ", $] , [], G), where the stack and the cache are as in the initial configuration and the buffer is empty.", "The constructed graph is the target AMR graph.", "In the first step, which is called concept identification, we map the input sentence w 1:n = w 1 , .", ".", ".", ", w n to a sequence of concepts c 1:n = c 1 , .", ".", ".", ", c n .", "We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later.", "As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size.", "The transitions of the parser are specified as follows.", "1.", "Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from.", "We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache.", "2.", "Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph.", "3.", "PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache.", "We also take out the concept v i appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.", "2 4.", "Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache.", "The label l is NULL if no arc is made and we use the action NOARC in this case.", "Otherwise we decompose the arc decision into two actions ARC and d-l. We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache.", "We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair.", "Given the sentence \"John wants to go\" and the recognized concept sequence \"Per want-01 go-01\" (person name category Per for \"John\"), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3.", "Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by .", "Let E G be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. 
The vertices are ordered according to their aligned position in the word sequence and the unaligned vertices are listed according to their order in the depth-first traversal of the graph.", "The oracle algorithm can look into E G to decide which transition to take next, or else to decide that it should fail.", "This decision is based on the mutually exclusive rules listed below.", "ShiftOrPop phase: the oracle chooses transi - tion Pop, in case there is no edge (v m , v) in E G such that vertex v is in S, or chooses tran- sition Shift and proceeds to the next phase.", "2.", "PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept and removes the vertex at this position and places its index, vertex pair onto the stack.", "The oracle chooses transition PushIndex(i) and proceeds to the next phase.", "3.", "ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them.", "If there is an arc, the oracle chooses its direction and label.", "After arc decisions to m−1 cache concepts are made, we jump to the next step.", "4.", "If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success, otherwise we proceed to the first step.", "We use the equation below to choose the cache concept to take out in the step PushIndex(i).", "For j ∈ [|β|], we write β j to denote the j-th vertex in β.", "We choose a vertex v i * in η such that: In words, v i * is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β.", "We move out of the cache vertex v i * and push it onto the stack, for later processing.", "i * = argmax i∈[m] min {j | (v i , β j ) ∈ E G } For each training example (x 1:n , g), the transition system generates the output AMR graph g from the input sequence x 1:n through an oracle sequence a 1:q ∈ Σ * a , where Σ a is the union of all possible actions.", "We model the probability of the output with the action sequence: P (a 1:q |x 1:n ) = q t=1 P (a t |a 1 , .", ".", ".", ", a t−1 , x 1:n ; θ) which we estimate using a sequence-to-sequence model, as we will describe in the next section.", "Soft vs Hard Attention for Sequence-to-action-sequence Shown in Figure 3 , our sequence-to-sequence model takes a word sequence w 1:n and its mapped concept sequence c 1:n as the input, and the action sequence a 1:q as the output.", "It uses two BiLSTM encoders, each encoding an input sequence.", "As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below.", "BiLSTM Encoder Given an input word sequence w 1:n , we use a bidirectional LSTM to encode it.", "At each step j, the current hidden states ← − h w j and − → h w j are generated from the previous hidden states ← − h w j+1 and − → h w j−1 , and the representation vector x j of the current input word w j : ← − h w j = LSTM( ← − h w j+1 , x j ) − → h w j = LSTM( − → h w j−1 , x j ) The representation vector x j is the concatenation of the embeddings of its word, lemma, and POS tag, respectively.", "Then the hidden states of both directions are concatenated as the final hidden state for word w j : h w j = [ ← − h w j ; − → h w j ] Similarly, for the concept sequence, the final hidden state for concept c j is: h c j = [ ← − h c j ; − → h c j ] LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two 
attention memories H w and H c , where H w is the concatenation of the state vectors of all input words, and H c for input concepts correspondingly: H w = [h w 1 ; h w 2 ; .", ".", ".", "; h w n ] (1) H c = [h c 1 ; h c 2 ; .", ".", ".", "; h c n ] (2) The decoder yields an action sequence a 1 , a 2 , .", ".", ".", ", a q as the output by calculating a sequence of hidden states s 1 , s 2 .", ".", ".", ", s q recurrently.", "While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model s t−1 ; (2) the embedding of the previous generated action e t−1 ; and (3) the previous context vectors for words µ w t−1 and concepts µ c t−1 , which are calculated using H w and H c , respectively.", "When t = 1, we initialize µ 0 as a zero vector, and set e 0 to the embedding of the start token \" s \".", "The hidden state s 0 is initialized as: s 0 = W d [ ← − h w 1 ; − → h w n ; ← − h c 1 ; − → h c n ] + b d , where W d and b d are model parameters.", "For each time-step t, the decoder feeds the concatenation of the embedding of previous action e t−1 and the previous context vectors for words µ w t−1 and concepts µ c t−1 into the LSTM model to update its hidden state.", "s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ]) (3) Then the attention probabilities for the word sequence and the concept sequence are calculated similarly.", "Take the word sequence as an example, α w t,i on h w i ∈ H w for time-step t is calculated as: t,i = v T c tanh(W h h w i + W s s t + b c ) α w t,i = exp( t,i ) N j=1 exp( t,j ) W h , W s , v c and b c are model parameters.", "The new context vector µ w t = n i=1 α w t,i h w i .", "The calculation of µ c t follows the same procedure, but with a different set of model parameters.", "The output probability distribution over all actions at the current state is calculated by: (4) where V a and b a are learnable parameters, and the number of rows in V a represents the number of all actions.", "The symbol Σ a is the set of all actions.", "P Σa = softmax(V a [s t ; µ w t ; µ c t ] + b a ), Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position.", "The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence.", "As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, l w and l c , to model monotonic attention to word and concept sequences respectively.", "The update to the decoder state now relies on a single position of each input sequence in contrast to Equation 3: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ]) (5) Control Mechanism.", "Both pointers are initialized as 0 and advanced to the next position deterministically.", "We move the concept attention focus l c to the next position after arc decisions to all the other m − 1 cache concepts are made.", "We move the word attention focus l w to its aligned position in case the new concept is aligned, otherwise we don't move the word focus.", "As shown in Figure 4 , after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01.", "As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to.", "Transition State Features for 
Decoder Another difference of our model with Buys and Blunsom (2017) is that we extract features from the current transition state configuration C t : e f (C t ) = [e f 1 (C t ); e f 2 (C t ); · · · ; e f l (C t )] where l is the number of features extracted from C t and e f k (C t ) (k = 1, .", ".", ".", ", l) represents the embedding for the k-th feature, which is learned during training.", "These feature embeddings are concatenated as e f (C t ), and fed as additional input to the decoder.", "For the soft attention decoder: s t = LSTM(s t−1 , [e t−1 ; µ w t−1 ; µ c t−1 ; e f (C t )]) and for the hard attention decoder: s t = LSTM(s t−1 , [e t−1 ; h w lw ; h c lc ; e f (C t )]) We use the following features in our experiments: 1.", "Phase type: indicator features showing which phase the next transition is.", "2.", "ShiftOrPop features: token features 3 for the rightmost cache concept and the leftmost buffer concept.", "Number of dependencies to words on the right, and the top three dependency labels for them.", "3.", "ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to.", "Word, concept and dependency distance between the two concepts.", "The labels for the two most recent outgoing arcs for these two concepts and their first incoming arc and the number of incoming arcs.", "Dependency label between the two positions if there is a dependency arc between them.", "4.", "PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache.", "The phase type features are deterministic from the last action output.", "For example, if the last action output is Shift, the current phase type would be PushIndex.", "We only extract corresponding features for this phase and fill all the other feature types with -NULLas placeholders.", "The features for other phases are similar.", "AMR Parsing Training and Decoding We train our models using the cross-entropy loss, over each oracle action sequence a * 1 , .", ".", ".", ", a * q : L = − q t=1 log P (a * t |a * 1 , .", ".", ".", ", a * t−1 , X; θ), (6) where X represents the input word and concept sequences, and θ is the model parameters.", "Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected to evaluate on the test set.", "Dropout with rate 0.3 is used during training.", "Beam search with a beam size of 10 is used for decoding.", "Both training and decoding use a Tesla K20X GPU.", "Hidden state sizes for both encoder and decoder are set to 100.", "The word embeddings are initialized from Glove pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training.", "The embeddings for POS tags and features are randomly initialized, with the sizes of 20 and 50, respectively.", "Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment.", "We define some special categories such as named entities (NE), dates (DATE), single rooted subgraphs involving multiple concepts (MULT) 4 , numbers (NUMBER) and phrases (PHRASE).", "The phrases are extracted based on the multiple-to-one alignment in the training data.", "One example phrase is more than which aligns to a single concept more-than.", "We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014) , which greedily 
aligns a span of words to AMR subgraphs using a set of heuristics.", "This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side.", "We use the semi-Markov model from Flanigan et al.", "(2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph.", "During decoding, our output has categories, and we need to map each category to the corresponding AMR concept or subgraph.", "We save a table Q which shows the original subgraph each category is collapsed from, and map each category to its original subgraph representation.", "We also use heuristic rules to generate the target-side AMR subgraph representation for NE, DATE, and NUMBER based on the source side tokens.", "Experiments We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 task 8 on meaning representation parsing (May, 2016) .", "The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences which mainly cover domains like newswire, discussion forum, etc.", "All parsing results are measured by Smatch (version 2.0.2) .", "Experiment Settings We categorize the training data using the automatic alignment and dump a template for date entities and frequent phrases from the multiple to one alignment.", "We also generate an alignment table from tokens or phrases to their candidate targetside subgraphs.", "For the dev and test data, we first extract the named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009 ) and extract date entities by matching spans with the date template.", "We further categorize the dataset with the categories we have defined.", "After categorization, we use Stanford CoreNLP to get the POS tags and dependencies of the categorized dataset.", "We run the oracle algorithm separately for training and dev data (with alignment) to get the statistics of individual phases.", "We use a cache size of 5 in our experiments.", "Results Individual Phase Accuracy We first evaluate the prediction accuracy of individual phases on the dev oracle data assuming gold prediction history.", "The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions respectively.", "Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.", "use a separate feedforward network to predict each phase independently.", "We use the same alignment from the SemEval dataset as in to avoid differences resulting from the aligner.", "Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats is using hard attention.", "We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset.", "The sequence-to-sequence models perform better than the feedforward model of on ShiftOrPop and ArcBinary, which shows that the whole-sentence context information is important for the prediction of these two phases.", "On the other hand, the sequence-tosequence models perform worse than the feedforward models on PushIndex and ArcLabel.", "One possible reason is that the model tries to optimize the overall accuracy, while these two phases account for fewer than 25% of the total transition actions and might be less attended to during 
the update.", "Table 2 shows the impact of different components for the sequence-to-sequence model.", "We can see that the transition state features play a very important role for predicting the correct transition action.", "This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction.", "Relying on the sequence-to-sequence model alone does not perform well in disambiguating these choices, while the transition state can enforce direct constraints.", "We can also see that while the hard attention only attends to one position of the input, it performs slightly better than the soft attention model, while the time complexity is lower.", "Impact of Different Components Impact of Different Cache Sizes The cache size of the transition system can be optimized as a trade-off between coverage of AMR graphs and the prediction accuracy.", "While larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make.", "From Table 3 we can see that Comparison with other Parsers Table 4 shows the comparison with other AMR parsers.", "The first three systems are some competitive neural models.", "We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017) .", "Konstas et al.", "(2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use selftraining on 20M unlabeled Gigaword sentences.", "Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction.", "Our model also We also show the performance of some of the best-performing models.", "While our hard attention achieves slightly lower performance in comparison with Wang et al.", "(2015a) and , it is worth noting that their approaches of using WordNet, semantic role labels and word cluster features are complimentary to ours.", "The alignment from the aligner and the concept identification identifier also play an important role for improving the performance.", "propose to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model.", "Dealing with Reentrancy Reentrancy is an important characteristic of AMR, and we evaluate the Smatch score only on the reentrant edges following Damonte et al.", "(2017) .", "From Table 5 we can see that our hard attention model significantly outperforms the feedforward model of in predicting reentrancies.", "This is because predicting reentrancy is directly related to the Ar-cBinary phase of the cache transition system since it decides to make multiple arc decisions to the same vertex, and we can see from Table 1 that the hard attention model has significantly better prediction accuracy in this phase.", "We also compare the reentrancy results of our transition system with two other systems, Damonte et al.", "(2017) and JAMR, where these statistics are available.", "From Table 5 , we can see that our cache transition system slightly outperforms these two systems in predicting reentrancies.", "Figure 5 shows a reentrancy example where JAMR and the feedforward network of do not predict well, while our system predicts the correct output.", "JAMR fails to predict the reentrancy arc from desire-01 to i, and connects the wrong arc from \"live-01\" to \"-\" instead of 
from \"desire-01\".", "The feedforward model of and live-01 to i.", "This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant.", "Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5 .", "When desire-01 and live-01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired.", "Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing.", "To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models.", "We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available.", "While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "1.", "3", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Cache Transition Parser", "Oracle Extraction Algorithm", "ShiftOrPop phase: the oracle chooses transi", "Soft vs Hard Attention for", "BiLSTM Encoder", "LSTM Decoder with Soft Attention", "Monotonic Hard Attention for Transition Systems", "Transition State Features for Decoder", "Training and Decoding", "Preprocessing and Postprocessing", "Experiments", "Experiment Settings", "Results", "Conclusion" ] }
GEM-SciDuet-train-125#paper-1343#slide-15
Conclusion
Cache transition system based on a mathematically sound formalism for parsing to graphs. The cache transition process can be well-modeled by sequence-to-sequence models. Features from transition states.
Cache transition system based on a mathematically sound formalism for parsing to graphs. The cache transition process can be well-modeled by sequence-to-sequence models. Features from transition states.
[]
GEM-SciDuet-train-126#paper-1344#slide-0
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices: for one reason, the composition of two relations $M_1, M_2$ may match a third $M_3$ (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. $M_1 \cdot M_2 \approx M_3$). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third ($M_3$) also justifies dimension reduction, because it implies a compositional constraint $M_1 \cdot M_2 \approx M_3$ that can be satisfied only by a lower dimension sub-manifold in the parameter space (see footnote 1 below).", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices, or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017).", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g. the composition of currency of country and headquarter location usually matches business operation currency, but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.", "(Footnote 1: It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.)", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem, but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set $\mathcal{T}$ of triples of the form $\langle h, r, t \rangle$, where $h, t \in \mathcal{E}$ are entities and $r \in \mathcal{R}$ is a relation (e.g. $\langle$The Matrix, country of film, Australia$\rangle$).", "A relation $r$ has its inverse $r^{-1} \in \mathcal{R}$, so that for every $\langle h, r, t \rangle \in \mathcal{T}$ we regard $\langle t, r^{-1}, h \rangle$ as also in the KB.", "Under this assumption, and given $\mathcal{T}$ as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete $\langle h, r, ? \rangle$ triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities $h, t$ as $d$-dimension vectors $u_h, v_t$ respectively, and relation $r$ as a $d \times d$ matrix $M_r$.", "If $u_h, v_t$ are one-hot vectors with dimension $d = |\mathcal{E}|$ corresponding to each entity, one can take $M_r$ as the adjacency matrix of entities joined by relation $r$, so the set of tail entities filling into $\langle h, r, ? \rangle$ is calculated by $u_h^\top M_r$ (with each nonzero entry corresponding to an answer).",
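The one-hot/adjacency-matrix reading above can be checked with a toy NumPy example; the entities and the single relation here are made up for illustration:

```python
import numpy as np

# Toy KB with three entities; M_r is the adjacency matrix of one relation:
# M_r[i, j] = 1 iff <entity_i, r, entity_j> is a fact.
E = ["The_Matrix", "Australia", "Australian_Dollar"]
M_r = np.zeros((3, 3))
M_r[0, 1] = 1.0                 # <The_Matrix, country_of_film, Australia>

u_h = np.eye(3)[0]              # one-hot vector for The_Matrix
tails = u_h @ M_r               # nonzero entries answer <The_Matrix, r, ?>
print([E[j] for j in np.nonzero(tails)[0]])   # -> ['Australia']
```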
"Thus, we have $u_h^\top M_r v_t > 0$ if and only if $\langle h, r, t \rangle \in \mathcal{T}$.", "This motivates us to use $u_h^\top M_r v_t$ as a natural parameter to model the plausibility of $\langle h, r, t \rangle$, even in a low dimension space with $d \ll |\mathcal{E}|$.", "Thus, we define the score function as $s(h, r, t) := \exp(u_h^\top M_r v_t)$ (1) for the basic model.", "This is similar to the bilinear model of Nickel et al. (2011), except that we distinguish $u_h$ (the vector for head entities) from $v_t$ (the vector for tail entities).", "It has also been proposed in Tian et al. (2016), but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations $r_1/\ldots/r_l$ to model paths in a KB (Guu et al., 2015), as defined by $r_1, \ldots, r_l$ participating in a sequence of facts such that the head entity of each fact coincides with the tail of the previous one.", "For example, a sequence of two facts $\langle$The Matrix, country of film, Australia$\rangle$ and $\langle$Australia, currency of country, Australian Dollar$\rangle$ forms a path of composition country of film / currency of country, because the head of the second fact (i.e. Australia) coincides with the tail of the first.", "Using the previous $d = |\mathcal{E}|$ analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define $s(h, r_1/\ldots/r_l, t) := \exp(u_h^\top M_{r_1} \cdots M_{r_l} v_t)$ to measure the plausibility of a path.", "It is explored in Guu et al. (2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn the parameters $u_h, v_t, M_r$ of the score function, we follow Tian et al. (2016) in using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) $\langle h, r_1/\ldots, t \rangle$ taken from the KB, we generate negative samples by replacing the tail entity $t$ with some random noise $t^*$.", "Then, we maximize $L_1 := \sum_{\text{path}} \ln \frac{s(h, r_1/\ldots, t)}{k + s(h, r_1/\ldots, t)} + \sum_{\text{noise}} \ln \frac{k}{k + s(h, r_1/\ldots, t^*)}$ as our KB-learning objective.", "Here, $k$ is the number of noises generated for each path.", "When the score function is regarded as probability, $L_1$ represents the log-likelihood of \"$\langle h, r_1/\ldots, t \rangle$ being an actual path and $\langle h, r_1/\ldots, t^* \rangle$ being noise\".", "Maximizing $L_1$ increases the scores of actual paths and decreases the scores of noises.",
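A minimal PyTorch sketch of the path score and the NCE term of the KB-learning objective $L_1$; tensor shapes and names are our assumptions, and the function returns the negative of the per-path contribution so that minimizing it maximizes $L_1$:

```python
import torch

def path_nce_loss(u_h, Ms, v_t, v_noise, k):
    """One path's NCE term: s = exp(u_h^T M_r1 ... M_rl v); returns
    -( ln s/(k+s) + sum over k noise tails of ln k/(k+s*) )."""
    x = u_h
    for M in Ms:                       # compose relation matrices along the path
        x = x @ M
    s_pos = torch.exp(x @ v_t)         # score of the actual path
    s_neg = torch.exp(v_noise @ x)     # scores of the k noise tails, shape (k,)
    return -(torch.log(s_pos / (k + s_pos)) + torch.log(k / (k + s_neg)).sum())
```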
"Joint Training with an Autoencoder Autoencoders learn efficient codings of high-dimensional data while trying to reconstruct the original data from the coding.", "By jointly training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e. the relation matrices).", "Formally, we define a vectorization $m_r$ for each relation matrix $M_r$, and use it as input to the autoencoder.", "$m_r$ is defined as a reshape of $M_r$ flattened into a $d^2$-dimension vector, and normalized such that $\|m_r\| = \sqrt{d}$.", "We define $c_r := \mathrm{ReLU}(A m_r)$ (2) as the coding.", "Here $A$ is a $c \times d^2$ matrix with $c \ll d^2$, and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010).", "We reconstruct the input from $c_r$ by multiplying a $d^2 \times c$ matrix $B$.", "We want $B c_r$ to be more similar to $m_r$ than to other relations.", "For this purpose, we define a similarity $g(r_1, r_2) := \exp\left(\frac{1}{\sqrt{dc}} m_{r_1}^\top B c_{r_2}\right)$ (3), which measures the length of $B c_{r_2}$ projected to the direction of $m_{r_1}$.", "In order to learn the parameters $A, B$, we adopt the Noise Contrastive Estimation scheme as in Sec.2: we generate random noises $r^*$ for each relation $r$ and maximize $L_2 := \sum_{r \in \mathcal{R}} \ln \frac{g(r, r)}{k + g(r, r)} + \sum_{r^* \sim \mathcal{R}} \ln \frac{k}{k + g(r, r^*)}$ as our reconstruction objective.", "Maximizing $L_2$ increases $m_r$'s similarity with $B c_r$, and decreases its similarity with $B c_{r^*}$.", "During joint training, both $L_1$ and $L_2$ are simultaneously maximized, and the gradient $\nabla L_2$ propagates to relation matrices as well.", "Since $\nabla L_2$ depends on $A$ and $B$, and $A, B$ interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.",
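Equations (2) and (3) amount to a one-hidden-layer autoencoder over flattened relation matrices; a hedged sketch follows (the initialization and shapes here are illustrative, not the paper's exact settings):

```python
import torch
import torch.nn as nn

class RelationAutoencoder(nn.Module):
    """Sketch of Equations (2)-(3): c_r = ReLU(A m_r) with c << d^2,
    reconstruction B c_r, and similarity
    g(r1, r2) = exp(m_r1^T B c_r2 / sqrt(d * c))."""
    def __init__(self, d, c):
        super().__init__()
        self.d, self.c = d, c
        self.A = nn.Parameter(torch.randn(c, d * d) / d)   # illustrative init
        self.B = nn.Parameter(torch.randn(d * d, c) / d)
    def encode(self, m_r):        # m_r: flattened, normalized M_r, shape (d*d,)
        return torch.relu(self.A @ m_r)
    def similarity(self, m_r1, c_r2):
        return torch.exp(m_r1 @ (self.B @ c_r2) / (self.d * self.c) ** 0.5)
```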
"The rule for setting η_1, λ_1 and η_2, λ_2 is that η_2 should be much smaller than η_1, because η_1, η_2 control the magnitude of the learning rates at the early stage of training, when the autoencoder is still largely random and ∆_2 does not make much sense; on the other hand, one has to choose λ_1 and λ_2 such that ∆_1/λ_1 and ∆_2/λ_2 are at the same scale, because the learning rates approach 1/(λ_1 τ_r) and 1/(λ_2 τ_r) respectively as the training proceeds.", "In this way, the autoencoder will not impose random patterns on relation matrices according to its initialization at the early stage, and a balance is kept between α_1(τ_r)∆_1 and α_2(τ_r)∆_2 later.", "But how to estimate ∆_1 and ∆_2?", "It seems that we can approximately calculate their scales from the initialization.", "In this work, we use i.i.d. Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are ‖u_h‖ ≈ 1, ‖v_t‖ ≈ 1, ‖M_r‖ ≈ √d, and ‖BAm_r‖ ≈ √(dc).", "Thus, by calculating ∇L_1 and ∇L_2 using (1) and (3), we have approximately ‖∆_1‖ ≈ ‖u_h‖‖v_t‖ ≈ 1, and ‖∆_2‖ ≈ (1/√(dc))‖Bc_r‖ ≈ (1/√(dc))‖BAm_r‖ ≈ 1.", "It suggests that, because of the scaling factor 1/√(dc) in (3), ∆_1 and ∆_2 are at the same scale, so we can set λ_1 = λ_2.", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec. 6.3, we will show the performance gains from these settings using the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to ‖M_r‖ = √d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize ‖M_r^T M_r − (1/d) tr(M_r^T M_r) I‖ during training.", "This regularizer drives M_r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of a pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016).", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somewhat counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017).", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al. (2013)'s vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well suited for 1-to-1 relations but might be too simple to represent N-to-N relations accurately (Wang et al., 2017).", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) were proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.",
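The matrix-side settings just listed (normalization, the orthogonality regularizer, and the (I + G)/2 initialization) are simple to express; a hedged NumPy sketch with our own naming:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

def init_relation():
    """(I + G)/2 initialization; the identity part helps pass information head to tail."""
    G = rng.normal(0, 1 / np.sqrt(d), (d, d))
    return (np.eye(d) + G) / 2

def normalize(M_r):
    """Rescale so that the Frobenius norm is sqrt(d)."""
    return M_r * (np.sqrt(d) / np.linalg.norm(M_r))

def orthogonal_penalty(M_r):
    """||M_r^T M_r - (1/d) tr(M_r^T M_r) I||; zero exactly when M_r is a scaled orthogonal matrix."""
    MtM = M_r.T @ M_r
    return np.linalg.norm(MtM - (np.trace(MtM) / d) * np.eye(d))

M_r = normalize(init_relation())
print(orthogonal_penalty(M_r))   # added as a penalty term alongside the main objective
```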
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
WordNet (Miller, 1995), and FB15k is taken from Freebase (Bollacker et al., 2008); both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks, because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from the training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimensions d = 256 and c = 16, and the SGD hyper-parameters η_1 = 1/64, η_2 = 2^-14 and λ_1 = λ_2 = 2^-14.", "The training batch size is 32, and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walks to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple ⟨h, r, ?⟩ in the KBC test, we calculate a score s(h, r, e) from (1) for every entity e ∈ E such that ⟨h, r, e⟩ does not appear in any of the training, validation, or test sets (Bordes et al., 2013).", "Then, the calculated scores, together with s(h, r, t) for the gold triple, are converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on the validation sets to determine the number of training epochs; we stop training when both MR and MRR have stopped improving.", "KBC Results The results are shown in Table 2.", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes clearer when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because, generally, joint training contributes through its regularizing effects, and drastic improvements are less expected.", "(Footnote 3: The source code and trained models are publicly released at https://github.com/tianran/glimvec.)", "[Figure 2: example sparse codings of relations; group 1: profession, profession^-1, film_crew_role^-1, film_release_region^-1, film_language^-1, nationality; group 2: currency_of_country, currency_of_company, currency_of_university, currency_of_film_budget; group 3: currency_of_film_budget, release_region_of_film, corporation_of_film, producer_of_film, writer_of_film; the axis shows coding dimensions 2 to 16.]", "When compositional training is enabled, the system usually achieves better MR, though it does not always improve in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018).", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For the re-experiments, we use Lin et al. (2015b)'s implementation of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations, and Nickel et al. (2016b)'s implementation of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.",
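The filtered ranking protocol just described can be sketched as follows; score_fn stands for eq. (1), and the helper names are ours. A brute-force loop over all entities is shown for clarity; a real implementation would batch the scores.

```python
import numpy as np

def filtered_rank(score_fn, h, r, t_gold, n_entities, known_triples):
    """Rank of the gold tail among all candidates, skipping tails (h, r, e) that are
    already known facts in the training/validation/test sets (the 'filtered' setting)."""
    s_gold = score_fn(h, r, t_gold)
    rank = 1
    for e in range(n_entities):
        if e == t_gold or (h, r, e) in known_triples:
            continue
        if score_fn(h, r, e) > s_gold:
            rank += 1
    return rank

def metrics(ranks):
    """MR, MRR and H10 from a list of gold ranks."""
    ranks = np.asarray(ranks, dtype=float)
    return {"MR": ranks.mean(),
            "MRR": (1.0 / ranks).mean(),
            "H10": (ranks <= 10).mean()}
```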
"We experimented with the default settings, and found that our models outperform most of them.", "[Table 2: KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets.", "The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previously published results.", "Bold numbers are the best in each sector, and (*) indicates the best of all.]", "Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017), as well as ComplEx (Trouillon et al., 2016) and ConvE (Dettmers et al., 2018), were previously the best results.", "Our models mostly outperform them.", "Other results include Kadlec et al. (2017)'s simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve the best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions with analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns a sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2, we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g. film and language), which probably constitute the skeleton of a KB.", "In the second group, we found that the 12th dimension strongly correlates with currency; and in the third group, we found that the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of the relation matrices and never constrains them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have a cosine similarity of 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP (McInnes and Healy, 2018) to embed M_r into a 2D plane (we also tried t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful).", "We use relation matrices trained on FB15k-237, and compare models trained for the same number of epochs.", "The results are shown in Figure 3.", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with frequent ones, and multiple traces of low dimension structures.",
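A minimal sketch of the Figure 3 visualization step, assuming the umap-learn package; the function name is ours.

```python
import numpy as np
import umap  # pip install umap-learn

def embed_relations(M):
    """2-D UMAP embedding of flattened relation matrices (as in Figure 3)."""
    flat = M.reshape(M.shape[0], -1)   # one row per relation matrix
    return umap.UMAP(n_components=2, random_state=0).fit_transform(flat)
```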
"It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows structures different from Figure 3b, which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r_1/r_2, r_3) pairs such that r_1/r_2 matches r_3.", "Formally, the list is constructed as follows.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that ⟨h, r, t⟩ is a fact in the KB.", "Similarly, we define C(r_1/r_2) as the set of (h, t) pairs such that ⟨h, r_1/r_2, t⟩ is a path.", "We regard (r_1/r_2, r_3) as a compositional constraint if their content sets are similar; that is, if |C(r_1/r_2) ∩ C(r_3)| ≥ 50 and the Jaccard similarity between C(r_1/r_2) and C(r_3) is ≥ 0.4.", "Then, after filtering out degenerate cases such as r_1 = r_3 or r_2 = r_1^-1, we obtained a list of 154 compositional constraints, e.g. (currency of country / country of film, currency of film budget).", "For each compositional constraint (r_1/r_2, r_3) in the list, we take the matrices M_1, M_2 and M_3 corresponding to r_1, r_2 and r_3 respectively, and rank M_3 according to its cosine similarity with M_1 M_2, among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as to a randomized baseline where M_2 is instead selected randomly from the relation matrices in JOINT+COMP (RANDOMM2).", "The results are shown in Table 3.", "We have evaluated 5 different random initializations for each model, trained for the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests the hypothesis that joint training might just be clustering M_3 and M_1 here, to the extent that M_3 and M_1 are so close that even a random M_2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.", "Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) that joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings as discussed in Sec. 4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices toward orthogonal.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.",
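The constraint-mining recipe above (content sets, the overlap and Jaccard thresholds, and the cosine ranking of M_3 against M_1 M_2) can be sketched as follows; all names are ours, and triples are assumed to be (h, r, t) tuples of integer ids.

```python
import numpy as np
from collections import defaultdict

def content_set(triples, r):
    """C(r): the (h, t) pairs such that <h, r, t> is a fact."""
    return {(h, t) for (h, rr, t) in triples if rr == r}

def path_content_set(triples, r1, r2):
    """C(r1/r2): (h, t) pairs joined by some mid entity m with <h, r1, m> and <m, r2, t>."""
    heads_by_mid = defaultdict(set)
    for (h, rr, m) in triples:
        if rr == r1:
            heads_by_mid[m].add(h)
    pairs = set()
    for (m, rr, t) in triples:
        if rr == r2:
            for h in heads_by_mid.get(m, ()):
                pairs.add((h, t))
    return pairs

def is_constraint(triples, r1, r2, r3, min_overlap=50, min_jaccard=0.4):
    """The paper's criterion: |C(r1/r2) & C(r3)| >= 50 and Jaccard >= 0.4."""
    c12, c3 = path_content_set(triples, r1, r2), content_set(triples, r3)
    inter, union = len(c12 & c3), len(c12 | c3)
    return inter >= min_overlap and union > 0 and inter / union >= min_jaccard

def rank_composition(M, r1, r2, r3):
    """1-based rank of M[r3] by cosine similarity to M[r1] @ M[r2] among all relations."""
    target = (M[r1] @ M[r2]).reshape(-1)
    target /= np.linalg.norm(target)
    sims = np.array([m.reshape(-1) @ target / np.linalg.norm(m) for m in M])
    order = np.argsort(-sims)
    return int(np.where(order == r3)[0][0]) + 1
```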
"In this work, path lengths are sampled from a Poisson distribution, so we vary the mean λ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5.", "We can see that, as λ gets larger, MR improves a lot but MRR slightly drops.", "This suggests that in FB15k-237, composition of relations might mainly help find more appropriate candidates for a missing entity, rather than pinpoint a correct one.", "Yet, joint training improves the base models even more as the paths get longer, especially in MR.", "This further supports our conjecture that joint training with an autoencoder may interact strongly with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, helping discover compositional constraints and benefiting from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below.", "For an incomplete triple ⟨h, r, ?⟩ in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data, which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting where all triples with OOV entities are removed from the test.", "The results are shown in Table 6." ] }
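The OOV back-off just described amounts to a small lookup; a hedged sketch, with the data layout and names assumed by us:

```python
from collections import Counter

# Assumed layout: head_counts maps each relation r to a Counter of head entities
# observed with r in training; entity_vocab is the set of training entities.
def resolve_head(h, r, entity_vocab, head_counts):
    """If h is OOV, back off to the most frequent training head of relation r."""
    if h in entity_vocab:
        return h
    return head_counts[r].most_common(1)[0][0]

# An OOV gold tail is handled on the scoring side instead: its vector is taken
# to be the zero vector when computing the score and the rank.
```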
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-0
Task Knowledge Base Completion
Knowledge Bases (KBs) store a large amount of facts in the form of <head entity, relation, tail entity> triples: The Knowledge Base Completion (KBC) task aims to predict missing parts of an incomplete triple: Help discover missing facts in a KB
Knowledge Bases (KBs) store a large amount of facts in the form of <head entity, relation, tail entity> triples: The Knowledge Base Completion (KBC) task aims to predict missing parts of an incomplete triple: Help discover missing facts in a KB
[]
GEM-SciDuet-train-126#paper-1344#slide-1
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices -for one reason, composition of two relations M 1 , M 2 may match a third M 3 (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M 1 ·M 2 ≈ M 3 ). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M_3) also justifies dimension reduction, because it implies a compositional constraint M_1 · M_2 ≈ M_3 that can be satisfied only by a lower dimension sub-manifold in the parameter space.", "(Footnote 1: It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.)", "Previous approaches reduce the dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices, or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017).", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g. the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec. 6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem, but we found some important settings after extensive pre-experiments (Sec. 4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec. 6.1), with strongly improved Mean Rank.", "We discuss the detailed settings that lead to the performance (Sec. 4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec. 6.2) and benefits from compositional training (Sec. 6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form ⟨h, r, t⟩, where h, t ∈ E are entities and r ∈ R is a relation (e.g. ⟨The Matrix, country of film, Australia⟩).", "A relation r has its inverse r^-1 ∈ R, so that for every ⟨h, r, t⟩ ∈ T we regard ⟨t, r^-1, h⟩ as also in the KB.", "Under this assumption, and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete ⟨h, r, ?⟩ triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u_h, v_t respectively, and relation r as a d × d matrix M_r.", "If u_h, v_t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M_r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into ⟨h, r, ?⟩ is calculated by u_h^T M_r (with each nonzero entry corresponding to an answer).", "Thus, we have u_h^T M_r v_t > 0 if and only if ⟨h, r, t⟩ ∈ T.",
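The d = |E| adjacency-matrix analogue above can be checked directly on a toy KB; this snippet is only an illustration of the identity, not part of the model.

```python
import numpy as np

n = 4                                  # |E| = 4 entities, one relation r
facts_r = [(0, 2), (1, 3)]             # pairs (h, t) with <h, r, t> in the KB

M_r = np.zeros((n, n))
for h, t in facts_r:
    M_r[h, t] = 1.0                    # M_r is the adjacency matrix of r

u = np.eye(n)                          # one-hot head vectors, d = |E|
v = np.eye(n)                          # one-hot tail vectors

print(u[0] @ M_r)                      # tail candidates for <0, r, ?>: [0. 0. 1. 0.]
print(u[0] @ M_r @ v[2] > 0)           # True  iff <0, r, 2> is a fact
print(u[0] @ M_r @ v[3] > 0)           # False
```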
GEM-SciDuet-train-126#paper-1344#slide-1
Vector Based Approach
A common approach to KBC is to model triples with a low dimension vector space, where Entity: represented by a vector (so that similar entities are close to each other) Relation: represented by a transformation of the vector space, which can be: Up to design choice
A common approach to KBC is to model triples with a low dimension vector space, where Entity: represented by a vector (so that similar entities are close to each other) Relation: represented by a transformation of the vector space, which can be: Up to design choice
[]
GEM-SciDuet-train-126#paper-1344#slide-2
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices; for one reason, the composition of two relations M_1, M_2 may match a third M_3 (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M_1·M_2 ≈ M_3). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
"Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices, or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017).", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g. the composition of currency of country and headquarter location usually matches business operation currency, but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.", "[1] It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem, but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model: A knowledge base (KB) is a set T of triples of the form ⟨h, r, t⟩, where h, t ∈ E are entities and r ∈ R is a relation (e.g. ⟨The Matrix, country of film, Australia⟩).", "A relation r has its inverse r^{-1} ∈ R, so that for every ⟨h, r, t⟩ ∈ T, we regard ⟨t, r^{-1}, h⟩ as also in the KB.", "Under this assumption, and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete ⟨h, r, ?⟩ triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u_h, v_t respectively, and relation r as a d×d matrix M_r.", "If u_h, v_t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M_r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into ⟨h, r, ?⟩ is calculated by u_h^T M_r (with each nonzero entry corresponding to an answer).", "Thus, we have u_h^T M_r v_t > 0 if and only if ⟨h, r, t⟩ ∈ T.", "This motivates us to use u_h^T M_r v_t as a natural parameter to model the plausibility of ⟨h, r, t⟩, even in a low dimension space with d ≪ |E|.", "Thus, we define the score function as s(h, r, t) := exp(u_h^T M_r v_t) (1) for the basic model.", "This is similar to the bilinear model of Nickel et al. (2011), except that we distinguish u_h (the vector for head entities) from v_t (the vector for tail entities).", "It has also been proposed in Tian et al. (2016), but for modeling dependency trees rather than KBs.",
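A minimal sketch of the score function (1), with toy parameters initialized as i.i.d. Gaussians of variance 1/d (the initialization later used in Sec.4); the entity and relation names here are only illustrative.

    import numpy as np

    d = 256  # the dimension used in the experiments
    rng = np.random.default_rng(0)

    entities = ["The_Matrix", "Australia"]
    u = {e: rng.normal(0.0, 1.0 / np.sqrt(d), d) for e in entities}      # head vectors u_h
    v = {e: rng.normal(0.0, 1.0 / np.sqrt(d), d) for e in entities}      # tail vectors v_t
    M = {"country_of_film": rng.normal(0.0, 1.0 / np.sqrt(d), (d, d))}   # relation matrices M_r

    def score(h, r, t):
        # s(h, r, t) := exp(u_h^T M_r v_t), equation (1)
        return float(np.exp(u[h] @ M[r] @ v[t]))

    print(score("The_Matrix", "country_of_film", "Australia"))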
"More generally, we consider composition of relations r_1/.../r_l to model paths in a KB (Guu et al., 2015), as defined by r_1, ..., r_l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous one.", "For example, a sequence of two facts ⟨The Matrix, country of film, Australia⟩ and ⟨Australia, currency of country, Australian Dollar⟩ forms a path of composition country of film / currency of country, because the head of the second fact (i.e. Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r_1/.../r_l, t) := exp(u_h^T M_{r_1} ··· M_{r_l} v_t) to measure the plausibility of a path.", "It is explored in Guu et al. (2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn the parameters u_h, v_t, M_r of the score function, we follow Tian et al. (2016) in using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) ⟨h, r_1/.../r_l, t⟩ taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t*.", "Then, we maximize L_1 := Σ_paths ln( s(h, r_1/.../r_l, t) / (k + s(h, r_1/.../r_l, t)) ) + Σ_noises ln( k / (k + s(h, r_1/.../r_l, t*)) ) as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as a probability, L_1 represents the log-likelihood of \"⟨h, r_1/.../r_l, t⟩ being an actual path and ⟨h, r_1/.../r_l, t*⟩ being noise\".", "Maximizing L_1 increases the scores of actual paths and decreases the scores of noises.",
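The path score and the per-path NCE term of L_1 can be sketched as follows; a minimal illustration assuming precomputed entity vectors and relation matrices, not the released implementation.

    import numpy as np

    def path_score(u_h, Ms, v_t):
        # s(h, r_1/.../r_l, t) := exp(u_h^T M_{r_1} ... M_{r_l} v_t)
        x = u_h
        for M in Ms:
            x = x @ M
        return np.exp(x @ v_t)

    def nce_term(u_h, Ms, v_t, noise_vs):
        # One path's contribution to L_1, with k = len(noise_vs) corrupted tails t*.
        k = len(noise_vs)
        s = path_score(u_h, Ms, v_t)
        val = np.log(s / (k + s))
        for v_star in noise_vs:
            s_star = path_score(u_h, Ms, v_star)
            val += np.log(k / (k + s_star))
        return val  # L_1 sums this over sampled paths; training maximizes it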
"Joint Training with an Autoencoder: Autoencoders learn efficient codings of high-dimensional data while trying to reconstruct the original data from the coding.", "By jointly training relation matrices with an autoencoder, we also expect it to help reduce the dimensionality of the original data (i.e. the relation matrices).", "Formally, we define a vectorization m_r for each relation matrix M_r, and use it as input to the autoencoder.", "m_r is defined as a reshape of M_r flattened into a d²-dimension vector, and normalized such that ‖m_r‖ = √d.", "We define c_r := ReLU(A m_r) (2) as the coding.", "Here A is a c × d² matrix with c ≪ d², and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010).", "We reconstruct the input from c_r by multiplying by a d² × c matrix B.", "We want B c_r to be more similar to m_r than to other relations' vectorizations.", "For this purpose, we define a similarity g(r_1, r_2) := exp( (1/√(dc)) m_{r_1}^T B c_{r_2} ) (3), which measures the length of B c_{r_2} projected onto the direction of m_{r_1}.", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2: we generate random noises r* for each relation r and maximize L_2 := Σ_{r∈R} ln( g(r, r) / (k + g(r, r)) ) + Σ_{r*∼R} ln( k / (k + g(r, r*)) ) as our reconstruction objective.", "Maximizing L_2 increases m_r's similarity with B c_r, and decreases its similarity with B c_{r*}.", "During joint training, both L_1 and L_2 are simultaneously maximized, and the gradient ∇L_2 propagates to the relation matrices as well.", "Since ∇L_2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.",
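The vectorization, coding (2) and similarity (3) are simple to write down; a minimal NumPy sketch, with A and B assumed to be given parameter matrices of the shapes stated above.

    import numpy as np

    def vectorize(M_r):
        # m_r: M_r flattened to d^2 dimensions, normalized so that ||m_r|| = sqrt(d)
        d = M_r.shape[0]
        m = M_r.ravel()
        return m * (np.sqrt(d) / np.linalg.norm(m))

    def coding(A, m_r):
        # c_r := ReLU(A m_r), equation (2); A is c x d^2 with c << d^2
        return np.maximum(A @ m_r, 0.0)

    def g(m_r1, B, c_r2, d, c):
        # g(r_1, r_2) := exp(m_{r_1}^T B c_{r_2} / sqrt(d c)), equation (3)
        return np.exp((m_r1 @ (B @ c_r2)) / np.sqrt(d * c))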
"Optimization Tricks: Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L_1 and ∇L_2, but if they follow ∇L_1 too much, the autoencoder has no effect; conversely, if they follow ∇L_2 too often, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse, in which the autoencoder imposes arbitrary patterns on relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L_1 + L_2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1/√(dc) in the definition of the similarity function (3), perhaps in combination with other settings as we discuss below.", "We have tried the different factors 1, 1/√d, 1/√c and 1/(dc) instead, with various combinations of d and c, but the autoencoder failed to learn meaningful codings in those settings.", "When the scaling factor is too small (e.g. 1/(dc)), all relations get almost the same coding; conversely, if the factor is too large (e.g. 1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L_1 and ∇L_2.", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ) := η / (1 + ηλτ) (4).", "Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in enough detail to keep a balance, we modify (4) to use a step counter τ_r for each relation r, counting the \"number of updates\" instead of data points [2].", "That is, whenever M_r gets a nonzero update from a gradient calculation, τ_r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η_1, λ_1 for updates coming from ∇L_1, and η_2, λ_2 for updates coming from ∇L_2.", "Thus, letting ∆_1 be the partial gradient of ∇L_1 and ∆_2 the partial gradient of ∇L_2, we update M_r by α_1(τ_r)∆_1 + α_2(τ_r)∆_2 at each step, where α_1(τ_r) := η_1 / (1 + η_1 λ_1 τ_r) and α_2(τ_r) := η_2 / (1 + η_2 λ_2 τ_r).", "The rule for setting η_1, λ_1 and η_2, λ_2 is that η_2 should be much smaller than η_1, because η_1, η_2 control the magnitude of the learning rates at the early stage of training, when the autoencoder is still largely random and ∆_2 does not make much sense; on the other hand, one has to choose λ_1 and λ_2 such that ‖∆_1‖/λ_1 and ‖∆_2‖/λ_2 are at the same scale, because the learning rates approach 1/(λ_1 τ_r) and 1/(λ_2 τ_r) respectively as the training proceeds.", "In this way, the autoencoder will not impose random patterns on relation matrices according to its initialization at the early stage, and a balance is kept between α_1(τ_r)∆_1 and α_2(τ_r)∆_2 later.", "But how to estimate ‖∆_1‖ and ‖∆_2‖?", "It seems that we can approximately calculate their scales from the initialization.", "In this work, we use i.i.d. Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are ‖u_h‖ ≈ 1, ‖v_t‖ ≈ 1, ‖M_r‖ ≈ √d, and ‖BA m_r‖ ≈ √(dc).", "Thus, by calculating ∇L_1 and ∇L_2 using (1) and (3), we have approximately ‖∆_1‖ ≈ ‖u_h‖‖v_t‖ ≈ 1, and ‖∆_2‖ ≈ (1/√(dc))‖B c_r‖ ≈ (1/√(dc))‖BA m_r‖ ≈ 1.", "This suggests that, because of the scaling factor 1/√(dc) in (3), ‖∆_1‖ and ‖∆_2‖ are at the same scale, so we can set λ_1 = λ_2.", "This might not be a mere coincidence.", "Training the Base Model: Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show the performance gains from these settings using the FB15k-237 validation set.", "Normalization: It is better to normalize relation matrices to ‖M_r‖ = √d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer: It is better to minimize ‖M_r^T M_r − (1/d)·tr(M_r^T M_r)·I‖ during training.", "This regularizer drives M_r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization: Instead of a pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016).", "Negative Sampling: Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somewhat counterintuitive compared to training word embeddings.",
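The per-relation update rule of Sec.4 and the normalization/regularizer of Sec.4.1 can be sketched as follows (gradient ascent on L_1 + L_2). The default hyper-parameter values are the ones reported in the experiments section; delta1 and delta2 stand for the partial gradients ∆_1 and ∆_2 and are assumed to be computed elsewhere.

    import numpy as np

    def alpha(eta, lam, tau_r):
        # learning rate schedule, equation (4), with a per-relation counter tau_r
        return eta / (1.0 + eta * lam * tau_r)

    def joint_step(M_r, delta1, delta2, tau_r,
                   eta1=1/64, lam1=2**-14, eta2=2**-14, lam2=2**-14):
        # One ascent step for a relation matrix: balance the update from the KB
        # objective (delta1) against the reconstruction objective (delta2),
        # then advance this relation's update counter.
        M_r = M_r + alpha(eta1, lam1, tau_r) * delta1 \
                  + alpha(eta2, lam2, tau_r) * delta2
        return M_r, tau_r + 1

    def renormalize(M_r):
        # keep ||M_r|| = sqrt(d) (Frobenius norm) during training
        d = M_r.shape[0]
        return M_r * (np.sqrt(d) / np.linalg.norm(M_r))

    def orthogonal_penalty(M_r):
        # ||M_r^T M_r - (1/d) tr(M_r^T M_r) I||, the regularizer to be minimized
        d = M_r.shape[0]
        G = M_r.T @ M_r
        return np.linalg.norm(G - (np.trace(G) / d) * np.eye(d))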
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
"WN18 collects word relations from WordNet (Miller, 1995), and FB15k is taken from Freebase (Bollacker et al., 2008); both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks, because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from the training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimensions d = 256 and c = 16, and the SGD hyper-parameters η_1 = 1/64, η_2 = 2^{-14} and λ_1 = λ_2 = 2^{-14}.", "The training batch size is 32, and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walk to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple ⟨h, r, ?⟩ in the KBC test, we calculate a score s(h, r, e) from (1) for every entity e ∈ E such that ⟨h, r, e⟩ does not appear in any of the training, validation, or test sets (Bordes et al., 2013).", "Then, the calculated scores, together with s(h, r, t) for the gold triple, are converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on the validation sets to determine the number of training epochs; we stop training when both MR and MRR have stopped improving.",
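The filtered ranking protocol just described can be sketched as follows; score is any callable implementing (1), and known_triples is assumed to be the union of the training, validation, and test triples.

    import numpy as np

    def filtered_rank(h, r, t_gold, entities, score, known_triples):
        # Rank the gold tail among candidates e, skipping every e != t_gold for
        # which <h, r, e> already appears in train/valid/test (filtered setting).
        s_gold = score(h, r, t_gold)
        rank = 1
        for e in entities:
            if e == t_gold or (h, r, e) in known_triples:
                continue
            if score(h, r, e) > s_gold:
                rank += 1
        return rank

    def summarize(ranks):
        # MR, MRR and Hits@10 from a list of gold-entity ranks
        ranks = np.asarray(ranks, dtype=float)
        return {"MR": float(ranks.mean()),
                "MRR": float((1.0 / ranks).mean()),
                "H10": float((ranks <= 10).mean())}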
"KBC Results: The results are shown in Table 2.", "[Table 2: KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets. The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previously published results. Bold numbers are the best in each sector, and (*) indicates the best of all.]", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes clearer when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because, generally, joint training contributes through its regularizing effects, and drastic improvements are less expected [3].", "[3] The source code and trained models are publicly released at https://github.com/tianran/glimvec.", "When compositional training is enabled, the system usually achieves better MR, though it does not always improve in the other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018).", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For the re-experiments, we use Lin et al. (2015b)'s implementation [4] of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations, and Nickel et al. (2016b)'s implementation [5] of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We experimented with the default settings, and found that our models outperform most of them.", "Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017), as well as ComplEx (Trouillon et al., 2016) and ConvE (Dettmers et al., 2018), were previously the best results; our models mostly outperform them.", "Other results include Kadlec et al. (2017)'s simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve the best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight: What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability: Due to the ReLU function in (2), our autoencoder learns a sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "[Figure 2: sparse codings of example relations over code dimensions 2-16; first group: profession, profession^{-1}, film_crew_role^{-1}, film_release_region^{-1}, film_language^{-1}, nationality; second group: currency_of_country, currency_of_company, currency_of_university, currency_of_film_budget; third group: currency_of_film_budget, release_region_of_film, corporation_of_film, producer_of_film, writer_of_film.]", "In the first group of Figure 2, we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g. film and language), which probably constitute the skeleton of a KB.", "In the second group, we found that the 12th dimension strongly correlates with currency; and in the third group, we found that the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices and never constrains them to be exactly equal to the originals, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have a cosine similarity of 0.338 [6].", "Low dimension manifold: In order to visualize the relation matrices learned by our joint and base models, we use UMAP [7] (McInnes and Healy, 2018) to embed M_r into a 2D plane [8].", "[8] We also tried t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful.", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3.", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with frequent ones, and multiple traces of low dimension structures.", "It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold." ] }
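A sketch of the Figure 3 visualization, assuming the third-party umap-learn package and a dictionary M of trained relation matrices (plotting omitted).

    import numpy as np
    import umap  # the umap-learn package

    def embed_relations(M):
        # Flatten each trained relation matrix and project to 2D for inspection.
        names = sorted(M)
        X = np.stack([M[r].ravel() for r in names])
        xy = umap.UMAP(n_components=2).fit_transform(X)
        return dict(zip(names, xy))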
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-2
Popular Types of Representations for Relation
Relation as vector translation: intuitively suitable for 1-to-1 relations, e.g. currency_of_country (Australia -> AUD, US -> USD); a single translation requires the same number of entities and the same distances within the two groups. Relation as linear map: flexibly modeling N-to-N relations, e.g. country_of_film (The Matrix -> Australia, Finding Nemo -> US). We follow the linear map approach. July 18, 2018
Relation as vector translation: intuitively suitable for 1-to-1 relations, e.g. currency_of_country (Australia -> AUD, US -> USD); a single translation requires the same number of entities and the same distances within the two groups. Relation as linear map: flexibly modeling N-to-N relations, e.g. country_of_film (The Matrix -> Australia, Finding Nemo -> US). We follow the linear map approach. July 18, 2018
[]
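As a toy illustration of the contrast drawn on this slide, the following sketch uses made-up 2D coordinates, purely for demonstration.

    import numpy as np

    # Toy 2D embeddings for the slide's example (illustrative coordinates).
    ent = {"The_Matrix":   np.array([0.0, 1.0]),
           "Finding_Nemo": np.array([1.0, 1.0]),
           "Australia":    np.array([0.0, 0.0]),
           "US":           np.array([1.0, 0.0])}

    # Relation as vector translation: a single vector r must send every head to
    # its tail, so an N-to-N relation like country_of_film cannot map both films
    # to Australia with one translation.
    r = ent["Australia"] - ent["The_Matrix"]
    print(np.allclose(ent["The_Matrix"] + r, ent["Australia"]))    # True
    print(np.allclose(ent["Finding_Nemo"] + r, ent["Australia"]))  # False: lands on US

    # Relation as linear map: a matrix scores every (head, tail) pair, so several
    # tails per head (and several heads per tail) can all receive high scores.
    M_country_of_film = np.eye(2)
    bilinear = lambda h, t: float(np.exp(ent[h] @ M_country_of_film @ ent[t]))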
GEM-SciDuet-train-126#paper-1344#slide-3
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices -for one reason, composition of two relations M 1 , M 2 may match a third M 3 (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M 1 ·M 2 ≈ M 3 ). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M 3 ) also justifies dimension reduction, because it implies a compositional constraint M 1 · M 2 ≈ M 3 that can be satisfied only by a lower dimension sub-manifold in the parameter space 1 .", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices , or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017) .", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g.", "the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1 ).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.", "1 It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form h, r, t , where h, t ∈ E are entities and r ∈ R is a relation (e.g.", "The Matrix, country of film, Australia ).", "A relation r has its inverse r −1 ∈ R so that for every h, r, t ∈ T , we regard t, r −1 , h as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete h, r, ?", "triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u h , v t respectively, and relation r as a d×d matrix M r .", "If u h , v t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into h, r, ?", "is calculated by u h M r (with each nonzero entry corresponds to an answer).", "Thus, we 
have u h M r v t > 0 if and only if h, r, t ∈ T .", "This motivates us to use u h M r v t as a natural parameter to model plausibility of h, r, t , even in a low dimension space with d |E|.", "Thus, we define the score function as s(h, r, t) := exp(u h M r v t ) (1) for the basic model.", "This is similar to the bilinear model of Nickel et al.", "(2011) , except that we distinguish u h (the vector for head entities) from v t (the vector for tail entities).", "It has also been proposed in Tian et al.", "(2016) , but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations r 1 / .", ".", ".", "/r l to model paths in a KB (Guu et al., 2015) , as defined by r 1 , .", ".", ".", ", r l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous.", "For example, a sequence of two facts The Matrix, country of film, Australia and Australia, currency of country, Australian Dollar form a path of composition country of film / currency of country, because the head of the second fact (i.e.", "Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r 1 / .", ".", ".", "/r l , t) := exp(u h M r 1 · · · M r l v t ) to measure the plausibility of a path.", "It is explored in Guu et al.", "(2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn parameters u h , v t , M r of the score function, we follow Tian et al.", "(2016) using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) h, r 1 / .", ".", ".", ", t taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t * .", "Then, we maximize L 1 := path ln s(h, r 1 / .", ".", ".", ", t) k + s(h, r 1 / .", ".", ".", ", t) + noise ln k k + s(h, r 1 / .", ".", ".", ", t * ) as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as probability, L 1 represents the log-likelihood of \" h, r 1 / .", ".", ".", ", t being actual path and h, r 1 / .", ".", ".", ", t * being noise\".", "Maximizing L 1 increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder Autoencoders learn efficient codings of highdimensional data while trying to reconstruct the original data from the coding.", "By joint training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e.", "relation matrices).", "Formally, we define a vectorization m r for each relation matrix M r , and use it as input to the autoencoder.", "m r is defined as a reshape of M r flattened into a d 2 -dimension vector, and normalized such that m r = √ d. 
We define c r := ReLU(Am r ) (2) as the coding.", "Here A is a c × d 2 matrix with c d 2 , and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010) .", "We reconstruct the input from c r by multiplying a d 2 × c matrix B.", "We want Bc r to be more similar to m r than other relations.", "For this purpose, we define a similarity g(r 1 , r 2 ) := exp( 1 √ dc m r 1 Bc r 2 ), (3) which measures the length of Bc r 2 projected to the direction of m r 1 .", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r * for each relation r and maximize L 2 := r∈R ln g(r, r) k + g(r, r) + r * ∼R ln k k + g(r, r * ) as our reconstruction objective.", "Maximizing L 2 increases m r 's similarity with Bc r , and decreases it with Bc r * .", "During joint training, both L 1 and L 2 are simultaneously maximized, and the gradient ∇L 2 propagates to relation matrices as well.", "Since ∇L 2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L 1 and ∇L 2 , but if they update ∇L 1 too much, the autoencoder has no effect; conversely, if they update ∇L 2 too often, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse -in which the autoencoder imposes arbitrary patterns to relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L 1 + L 2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1 √ dc in definition of the similarity function (3) , perhaps being combined with other settings as we discuss below.", "We have tried different factors 1, 1 √ d , 1 √ c and 1 dc instead, with various combinations of d and c; but the autoencoder failed to learn meaningful codings in other settings.", "When the scaling factor is too small (e.g.", "1 dc ), all relations get almost the same coding; conversely if the factor is too large (e.g.", "1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L 1 and ∇L 2 .", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ ) := η 1 + ηλτ .", "(4) Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a a step counter τ r for each relation r, counting \"number of updates\" instead of data points 2 .", "That is, whenever M r gets a nonzero update from a gradient calculation, τ r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η 1 , λ 1 for updates coming from ∇L 1 , and η 2 , λ 2 for updates coming from ∇L 2 .", "Thus, let ∆ 1 be the partial gradient of ∇L 1 , and ∆ 2 the partial gradient of ∇L 2 , we update M r by α 1 (τ r )∆ 1 + α 2 (τ r )∆ 2 at each step, where α 1 (τ r ) := η 1 1 + η 1 λ 1 τ r , α 2 (τ r ) := η 2 1 + η 2 λ 2 τ r .", "The rule for setting η 1 , λ 1 and η 2 , λ 2 is 
that, η 2 should be much smaller than η 1 , because η 1 , η 2 control the magnitude of learning rates at the early stage of training, with the autoencoder still largely random and ∆ 2 not making much sense; on the other hand, one has to choose λ 1 and λ 2 such that ∆ 1 /λ 1 and ∆ 2 /λ 2 are at the same scale, because the learning rates approach 1/(λ 1 τ r ) and 1/(λ 2 τ r ) respectively, as the training proceeds.", "In this way, the autoencoder will not impose random patterns to relation matrices according to its initialization at the early stage, and a balance is kept between α 1 (τ r )∆ 1 and α 2 (τ r )∆ 2 later.", "But how to estimate ∆ 1 and ∆ 2 ?", "It seems that we can approximately calculate their scales from initialization.", "In this work, we use i.i.d.", "Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are u h ≈ 1, v t ≈ 1, M r ≈ √ d, and BAm r ≈ √ dc.", "Thus, by calculating ∇L 1 and ∇L 2 using (1) and (3) , we have approximately ∆ 1 ≈ u h v t ≈ 1, and ∆ 2 ≈ 1 √ dc Bc r ≈ 1 √ dc BAm r ≈ 1.", "It suggests that, because of the scaling factor 1 √ dc in (3), we have ∆ 1 and ∆ 2 at the same scale, so we can set λ 1 = λ 2 .", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show performance gains by these settings using the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to M r = √ d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize M r M r − 1 d tr(M r M r )I during training.", "This regularizer drives M r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016) .", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somehow counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017) .", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al.", "(2013) 's vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1to-1 relations but might be too simple to represent N -to-N relations accurately (Wang et al., 2017) .", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) are proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.", 
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
"WN18 collects word relations from WordNet (Miller, 1995), and FB15k is taken from Freebase (Bollacker et al., 2008); both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks, because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimension d = 256 and c = 16, and the SGD hyper-parameters η_1 = 1/64, η_2 = 2^{-14} and λ_1 = λ_2 = 2^{-14}.", "The training batch size is 32, and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walk to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple ⟨h, r, ?⟩ in KBC test, we calculate a score s(h, r, e) from (1) for every entity e ∈ E such that ⟨h, r, e⟩ does not appear in any of the training, validation, or test sets (Bordes et al., 2013).", "Then, the calculated scores together with s(h, r, t) for the gold triple are converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.",
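A minimal sketch (Python with numpy; our own illustration, not the released glimvec code, and all names here are hypothetical) of the filtered ranking protocol and the MR/MRR/H10 metrics described above:

import numpy as np

def filtered_rank(scores, gold, known_tails):
    # scores[e] = s(h, r, e) for every entity e; gold = index of the gold tail t.
    # known_tails = entities e with <h, r, e> in train/valid/test; they are
    # excluded from the candidates (the "filtered" setting of Bordes et al., 2013).
    mask = np.ones(len(scores), dtype=bool)
    mask[list(known_tails - {gold})] = False
    # rank = 1 + number of remaining candidates scoring above the gold triple
    return 1 + int((scores[mask] > scores[gold]).sum())

def mr_mrr_h10(ranks):
    ranks = np.asarray(ranks, dtype=float)
    return ranks.mean(), (1.0 / ranks).mean(), float((ranks <= 10).mean())

Collecting filtered_rank over all test triples and feeding the list of ranks to mr_mrr_h10 yields the three numbers reported for each model.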
"KBC Results The results are shown in Table 2.", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes clearer when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because, generally, joint training contributes with its regularizing effects, and drastic improvements are less expected [3].", "When compositional training is enabled, the system usually achieves better MR, though it does not always improve in other measures.", "[3] The source code and trained models are publicly released at https://github.com/tianran/glimvec.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018).", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For re-experiments, we use Lin et al. (2015b)'s implementation [4] of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations, and Nickel et al. (2016b)'s implementation [5] of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We experimented with the default settings, and found that our models outperform most of them.", "[Table 2: KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets. The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results. Bold numbers are the best in each sector, and (*) indicates the best of all.]", "Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017), as well as ComplEx (Trouillon et al., 2016) and ConvE, were previously the best results.", "Our models mostly outperform them.", "Other results include Kadlec et al. (2017)'s simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns sparse codings, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "[Figure 2: sparse codings of example relations over the 16 coding dimensions; the first group includes profession, profession^{-1}, film_crew_role^{-1}, film_release_region^{-1}, film_language^{-1} and nationality; the second group includes currency_of_country, currency_of_company, currency_of_university and currency_of_film_budget; the third group includes currency_of_film_budget, release_region_of_film, corporation_of_film, producer_of_film and writer_of_film.]", "In the first group of Figure 2, we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g. film and language), which probably constitute the skeleton of a KB.", "In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrains them to be exactly equal to the originals, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have a cosine similarity of 0.338 [6].", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP [7] (McInnes and Healy, 2018) to embed M_r into a 2D plane [8].", "[8] We also tried t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful.", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3.", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with frequent ones, and multiple traces of low dimension structures.",
"It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures from Figure 3b, which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r_1/r_2, r_3) pairs such that r_1/r_2 matches r_3.", "Formally, the list is constructed as below.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that ⟨h, r, t⟩ is a fact in the KB.", "Similarly, we define C(r_1/r_2) as the set of (h, t) pairs such that ⟨h, r_1/r_2, t⟩ is a path.", "We regard (r_1/r_2, r_3) as a compositional constraint if their content sets are similar; that is, if |C(r_1/r_2) ∩ C(r_3)| ≥ 50 and the Jaccard similarity between C(r_1/r_2) and C(r_3) is ≥ 0.4.", "Then, after filtering out degenerate cases such as r_1 = r_3 or r_2 = r_1^{-1}, we obtained a list of 154 compositional constraints, e.g. (currency of country/country of film, currency of film budget).", "For each compositional constraint (r_1/r_2, r_3) in the list, we take the matrices M_1, M_2 and M_3 corresponding to r_1, r_2 and r_3 respectively, and rank M_3 according to its cosine similarity with M_1 M_2, among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as to a randomized baseline where M_2 is instead selected randomly from the relation matrices in JOINT+COMP (RANDOMM2).", "The results are shown in Table 3.", "We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests the hypothesis that joint training might be just clustering M_3 and M_1 here, to the extent that M_3 and M_1 are so close that even a random M_2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.",
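The content-set construction and the thresholds above (overlap ≥ 50, Jaccard ≥ 0.4) can be sketched as follows (a rough Python illustration under our own naming, not taken from the paper's released code):

from collections import defaultdict

def content_sets(triples):
    # C(r): set of (h, t) pairs such that <h, r, t> is a fact in the KB
    C = defaultdict(set)
    for h, r, t in triples:
        C[r].add((h, t))
    return C

def composed_content(C, r1, r2):
    # C(r1/r2): (h, t) pairs joined by a length-2 path, r1 followed by r2
    heads_reaching = defaultdict(set)      # middle entity -> heads of r1
    for h, m in C[r1]:
        heads_reaching[m].add(h)
    return {(h, t) for m, t in C[r2] for h in heads_reaching.get(m, ())}

def is_compositional_constraint(C, r1, r2, r3, min_overlap=50, min_jaccard=0.4):
    comp = composed_content(C, r1, r2)
    inter, union = comp & C[r3], comp | C[r3]
    return len(inter) >= min_overlap and len(inter) / max(len(union), 1) >= min_jaccard

Pairs passing this test (after dropping degenerate cases such as r_1 = r_3) form the candidate list; each M_3 is then ranked by its cosine similarity with M_1 M_2 among all relation matrices.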
"Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) that joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings, as discussed in Sec. 4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices toward orthogonality.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.", "In this work, path lengths are sampled from a Poisson distribution; we thus vary the mean λ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5.", "We can see that, as λ gets larger, MR improves much but MRR slightly drops.", "It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves base models even more as the paths get longer, especially in MR.", "This further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks, with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; that the joint training technique drives high-dimensional data toward low dimension manifolds; and that the reduction of dimensionality may interact strongly with composition, help discovering compositional constraints, and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below.", "For an incomplete triple ⟨h, r, ?⟩ in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data, which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting that all triples with OOV entities are removed from the test.", "The results are shown in Table 6." ] }
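The OOV back-off just described admits a very small sketch (Python; our own illustration with hypothetical names, not the paper's released implementation):

from collections import Counter, defaultdict

def build_head_counts(train_triples):
    counts = defaultdict(Counter)
    for h, r, t in train_triples:
        counts[r][h] += 1
    return counts

def resolve_head(h, r, head_counts, vocab):
    # OOV head: back off to the most frequent training head of relation r
    return h if h in vocab else head_counts[r].most_common(1)[0][0]

# An OOV gold tail would instead be scored with the zero vector, e.g.
# v_t = V[t] if t in vocab else np.zeros(d)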
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-3
Matrices are Difficult to Train
More parameters compared to entity vectors. Objective is highly non-convex.
More parameters compared to entity vectors. Objective is highly non-convex.
[]
GEM-SciDuet-train-126#paper-1344#slide-4
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices - for one reason, the composition of two relations M_1, M_2 may match a third M_3 (e.g. the composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M_1 · M_2 ≈ M_3). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
"Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices, or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017).", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g. the composition of currency of country and headquarter location usually matches business operation currency, but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec. 6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds a posteriori from data, and it does not impose any explicit hard constraints.", "[1] It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem, but we found some important settings after extensive pre-experiments (Sec. 4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec. 6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec. 4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec. 6.2) and benefits from compositional training (Sec. 6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form ⟨h, r, t⟩, where h, t ∈ E are entities and r ∈ R is a relation (e.g. ⟨The Matrix, country of film, Australia⟩).", "A relation r has its inverse r^{-1} ∈ R, so that for every ⟨h, r, t⟩ ∈ T, we regard ⟨t, r^{-1}, h⟩ as also in the KB.", "Under this assumption, and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for the missing tail entity in an incomplete ⟨h, r, ?⟩ triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u_h, v_t respectively, and relation r as a d × d matrix M_r.", "If u_h, v_t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M_r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into ⟨h, r, ?⟩ is calculated by u_h^T M_r (with each nonzero entry corresponding to an answer).", "Thus, we have u_h^T M_r v_t > 0 if and only if ⟨h, r, t⟩ ∈ T.",
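As a toy illustration of this d = |E| analogue (plain numpy, with made-up entities), a one-hot head vector times an adjacency matrix marks the tails of ⟨h, r, ?⟩, and a product of two adjacency matrices marks the tails reachable by a length-2 path:

import numpy as np

E = ["The_Matrix", "Australia", "Australian_Dollar"]
idx = {e: i for i, e in enumerate(E)}

def adjacency(pairs):
    M = np.zeros((len(E), len(E)))
    for h, t in pairs:
        M[idx[h], idx[t]] = 1.0
    return M

M_country = adjacency([("The_Matrix", "Australia")])          # country_of_film
M_currency = adjacency([("Australia", "Australian_Dollar")])  # currency_of_country

u = np.zeros(len(E)); u[idx["The_Matrix"]] = 1.0              # one-hot head
print(u @ M_country)               # nonzero exactly at the tails of <h, r, ?>
print(u @ M_country @ M_currency)  # composition: nonzero at Australian_Dollar

This is also the intuition behind the compositional constraint M_1 · M_2 ≈ M_3 from the introduction.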
"This motivates us to use u_h^T M_r v_t as a natural parameter to model the plausibility of ⟨h, r, t⟩, even in a low dimension space with d ≪ |E|.", "Thus, we define the score function as s(h, r, t) := exp(u_h^T M_r v_t) (1) for the basic model.", "This is similar to the bilinear model of Nickel et al. (2011), except that we distinguish u_h (the vector for head entities) from v_t (the vector for tail entities).", "It has also been proposed in Tian et al. (2016), but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations r_1/.../r_l to model paths in a KB (Guu et al., 2015), as defined by r_1, ..., r_l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous one.", "For example, a sequence of two facts ⟨The Matrix, country of film, Australia⟩ and ⟨Australia, currency of country, Australian Dollar⟩ forms a path of composition country of film/currency of country, because the head of the second fact (i.e. Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r_1/.../r_l, t) := exp(u_h^T M_{r_1} · · · M_{r_l} v_t) to measure the plausibility of a path.", "It is explored in Guu et al. (2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn the parameters u_h, v_t, M_r of the score function, we follow Tian et al. (2016) in using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) ⟨h, r_1/..., t⟩ taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t*.", "Then, we maximize L_1 := Σ_{path} ln( s(h, r_1/..., t) / (k + s(h, r_1/..., t)) ) + Σ_{noise} ln( k / (k + s(h, r_1/..., t*)) ) as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as a probability, L_1 represents the log-likelihood of \"⟨h, r_1/..., t⟩ being an actual path and ⟨h, r_1/..., t*⟩ being noise\".", "Maximizing L_1 increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder Autoencoders learn efficient codings of high-dimensional data while trying to reconstruct the original data from the coding.", "By jointly training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e. relation matrices).", "Formally, we define a vectorization m_r for each relation matrix M_r, and use it as input to the autoencoder.", "m_r is defined as a reshape of M_r flattened into a d^2-dimension vector, and normalized such that ‖m_r‖ = √d.",
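A compact sketch of the score function (1), its path extension, and the NCE objective L_1 (Python with numpy; shapes and names are our assumptions for illustration, not the released code):

import numpy as np

rng = np.random.default_rng(0)
nE, d, k = 1000, 64, 5                       # entities, dimension, noises per path
U = rng.normal(0, 1/np.sqrt(d), (nE, d))     # head vectors u_h
V = rng.normal(0, 1/np.sqrt(d), (nE, d))     # tail vectors v_t
M = {0: rng.normal(0, 1/np.sqrt(d), (d, d))} # one toy relation matrix M_r

def score(h, rels, t):
    x = U[h]
    for r in rels:                           # composition M_{r1} ... M_{rl}
        x = x @ M[r]
    return np.exp(x @ V[t])                  # s(h, r_1/.../r_l, t), eq. (1)

def nce_objective(h, rels, t, noise_tails):
    s_pos = score(h, rels, t)
    obj = np.log(s_pos / (k + s_pos))        # positive term of L_1
    for t_star in noise_tails:
        s_neg = score(h, rels, t_star)
        obj += np.log(k / (k + s_neg))       # noise terms of L_1
    return obj                               # maximized during training

print(nce_objective(3, [0], 7, rng.integers(0, nE, size=k)))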
"We define c_r := ReLU(A m_r) (2) as the coding.", "Here A is a c × d^2 matrix with c ≪ d^2, and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010).", "We reconstruct the input from c_r by multiplying by a d^2 × c matrix B.", "We want B c_r to be more similar to m_r than to other relations.", "For this purpose, we define a similarity g(r_1, r_2) := exp( (1/√(dc)) m_{r_1}^T B c_{r_2} ) (3), which measures the length of B c_{r_2} projected onto the direction of m_{r_1}.", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec. 2: we generate random noises r* for each relation r, and maximize L_2 := Σ_{r∈R} ln( g(r, r) / (k + g(r, r)) ) + Σ_{r*∼R} ln( k / (k + g(r, r*)) ) as our reconstruction objective.", "Maximizing L_2 increases m_r's similarity with B c_r, and decreases it with B c_{r*}.", "During joint training, both L_1 and L_2 are simultaneously maximized, and the gradient ∇L_2 propagates to relation matrices as well.", "Since ∇L_2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec. 6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L_1 and ∇L_2, but if they follow ∇L_1 too much, the autoencoder has no effect; conversely, if they follow ∇L_2 too often, all relation matrices collapse into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse - in which the autoencoder imposes arbitrary patterns on relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L_1 + L_2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1/√(dc) in the definition of the similarity function (3), perhaps in combination with other settings as we discuss below.", "We have tried different factors 1, 1/√d, 1/√c and 1/(dc) instead, with various combinations of d and c, but the autoencoder failed to learn meaningful codings in those settings.", "When the scaling factor is too small (e.g. 1/(dc)), all relations get almost the same coding; conversely, if the factor is too large (e.g. 1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L_1 and ∇L_2.", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ) := η / (1 + ηλτ) (4).", "Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a step counter τ_r for each relation r, counting \"number of updates\" instead of data points [2].", "That is, whenever M_r gets a nonzero update from a gradient calculation, τ_r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η_1, λ_1 for updates coming from ∇L_1, and η_2, λ_2 for updates coming from ∇L_2.", "Thus, letting ∆_1 be the partial gradient of ∇L_1 and ∆_2 the partial gradient of ∇L_2, we update M_r by α_1(τ_r)∆_1 + α_2(τ_r)∆_2 at each step, where α_1(τ_r) := η_1 / (1 + η_1 λ_1 τ_r) and α_2(τ_r) := η_2 / (1 + η_2 λ_2 τ_r).",
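The coding (2), the scaled similarity (3), and the per-relation update with two learning-rate schedules can be sketched as follows (Python/numpy; a rough rendering using the hyper-parameters reported in Sec. 6, not the actual implementation):

import numpy as np

rng = np.random.default_rng(0)
d, c = 64, 16
A = rng.normal(0, 1/np.sqrt(d), (c, d * d))       # encoder, c x d^2
B = rng.normal(0, 1/np.sqrt(d), (d * d, c))       # decoder, d^2 x c

def vectorize(Mr):
    m = Mr.reshape(-1)
    return np.sqrt(d) * m / np.linalg.norm(m)     # normalize so ||m_r|| = sqrt(d)

def coding(m):
    return np.maximum(A @ m, 0.0)                 # c_r = ReLU(A m_r), eq. (2)

def similarity(m1, c2):
    return np.exp((m1 @ (B @ c2)) / np.sqrt(d * c))  # g with the 1/sqrt(dc) factor, eq. (3)

eta1, lam1 = 1/64, 2**-14                         # rates for updates from grad L_1
eta2, lam2 = 2**-14, 2**-14                       # much smaller eta2 for grad L_2

def update(Mr, delta1, delta2, tau_r):
    a1 = eta1 / (1 + eta1 * lam1 * tau_r)         # alpha_1(tau_r)
    a2 = eta2 / (1 + eta2 * lam2 * tau_r)         # alpha_2(tau_r)
    return Mr + a1 * delta1 + a2 * delta2, tau_r + 1   # ascent on both objectives

Here tau_r is the per-relation counter of nonzero updates, which is what distinguishes this schedule from the plain data-point counter of (4).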
"The rule for setting η_1, λ_1 and η_2, λ_2 is that η_2 should be much smaller than η_1, because η_1, η_2 control the magnitude of learning rates at the early stage of training, with the autoencoder still largely random and ∆_2 not making much sense; on the other hand, one has to choose λ_1 and λ_2 such that ∆_1/λ_1 and ∆_2/λ_2 are at the same scale, because the learning rates approach 1/(λ_1 τ_r) and 1/(λ_2 τ_r) respectively as the training proceeds.", "In this way, the autoencoder will not impose random patterns on relation matrices according to its initialization at the early stage, and a balance is kept between α_1(τ_r)∆_1 and α_2(τ_r)∆_2 later.", "But how to estimate ∆_1 and ∆_2?", "It seems that we can approximately calculate their scales from initialization.", "In this work, we use i.i.d. Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are ‖u_h‖ ≈ 1, ‖v_t‖ ≈ 1, ‖M_r‖ ≈ √d, and ‖B A m_r‖ ≈ √(dc).", "Thus, by calculating ∇L_1 and ∇L_2 using (1) and (3), we have approximately ‖∆_1‖ ≈ ‖u_h‖‖v_t‖ ≈ 1, and ‖∆_2‖ ≈ (1/√(dc))‖B c_r‖ ≈ (1/√(dc))‖B A m_r‖ ≈ 1.", "It suggests that, because of the scaling factor 1/√(dc) in (3), we have ∆_1 and ∆_2 at the same scale, so we can set λ_1 = λ_2.", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec. 6.3, we will show the performance gains from these settings on the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to ‖M_r‖ = √d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize ‖M_r^T M_r − (1/d) tr(M_r^T M_r) I‖ during training.", "This regularizer drives M_r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of a pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016).", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somewhat counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a), and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017).", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al. (2013)'s vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1-to-1 relations but might be too simple to represent N-to-N relations accurately (Wang et al., 2017).", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) are proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices." ] }
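The orthogonality regularizer, normalization, and (I + G)/2 initialization from Sec. 4.1 look roughly like this (Python/numpy sketch; our reading of the description above, with the unsquared norm as one plausible choice):

import numpy as np

rng = np.random.default_rng(0)
d = 64

def init_relation():
    G = rng.normal(0, 1/np.sqrt(d), (d, d))
    return (np.eye(d) + G) / 2        # identity part passes head information to tail

def renormalize(Mr):
    return np.sqrt(d) * Mr / np.linalg.norm(Mr)   # keep ||M_r|| = sqrt(d)

def orth_penalty(Mr):
    S = Mr.T @ Mr
    # || M_r^T M_r - (1/d) tr(M_r^T M_r) I ||, minimized to push M_r toward orthogonality
    return np.linalg.norm(S - (np.trace(S) / d) * np.eye(d))

print(orth_penalty(init_relation()))   # small values indicate near-orthogonal matrices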
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
WordNet (Miller, 1995) , and FB15k is taken from Freebase (Bollacker et al., 2008) ; both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimension d = 256 and c = 16, the SGD hyper-parameters η 1 = 1/64, η 2 = 2 −14 and λ 1 = λ 2 = 2 −14 .", "The training batch size is 32 and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walk to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple h, r, ?", "in KBC test, we calculate a score s(h, r, e) from (1), for every entity e ∈ E such that h, r, e does not appear in any of the training, validation, or test sets (Bordes et al., 2013) .", "Then, the calculated scores together with s(h, r, t) for the gold triple is converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.", "KBC Results The results are shown in Table 2 .", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes more clear when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because generally, joint training contributes with its regularizing effects, and drastic improvements are less expected 3 .", "When compositional training is enabled, 3 The source code and trained models are publicly released at https://github.com/tianran/glimvec, where profession profession −1 film_crew_role −1 film_release_region −1 film_language −1 nationality currency_of_country currency_of_company currency_of_university currency_of_film_budget 2 4 6 8 10 12 14 16 currency_of_film_budget release_region_of_film corporation_of_film producer_of_film writer_of_film the system usually achieves better MR, though not always improves in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018) .", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For re-experiments, we use Lin et al.", "(2015b) 's implementation 4 of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations; and Nickel et al.", "(2016b) 's implementation 5 of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We 
experimented with the default settings, and found that our models outperform most of them.", "Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017) Table 2 : KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets.", "The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results.", "Bold numbers are the best in each sector, and ( * ) indicates the best of all.", "(Trouillon et al., 2016) and ConvE were previously the best results.", "Our models mostly outperform them.", "Other results include Kadlec et al.", "(2017) 's simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2 , we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g.", "film and language), which probably constitute the skeleton of a KB.", "In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrain them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have 6 a cosine similarity 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP 7 (McInnes and Healy, 2018) to embed M r into a 2D plane 8 .", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3 .", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with 
frequent ones, and multiple traces of low dimension structures.", "It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures against Figure 3b , which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r 1 /r 2 , r 3 ) pairs such that r 1 /r 2 matches r 3 .", "Formally, the list is constructed as below.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that h, r, t is a fact in the KB.", "Similarly, we define C(r 1 /r 2 ) t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful.", "as the set of (h, t) pairs such that h, r 1 /r 2 , t is a path.", "We regard (r 1 /r 2 , r 3 ) as a compositional constraint if their content sets are similar; that is, if |C(r 1 /r 2 ) ∩ C(r 3 )| ≥ 50 and the Jaccard similarity between C(r 1 /r 2 ) and C(r 3 ) is ≥ 0.4.", "Then, after filtering out degenerated cases such as r 1 = r 3 or r 2 = r −1 1 , we obtained a list of 154 compositional constraints, e.g.", "(currency of country/country of film, currency of film budget).", "For each compositional constraint (r 1 /r 2 , r 3 ) in the list, we take the matrices M 1 , M 2 and M 3 corresponding to r 1 , r 2 and r 3 respectively, and rank M 3 according to its cosine similarity with M 1 M 2 , among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as a randomized baseline where M 2 is selected randomly from the relation matrices in JOINT+COMP instead (RANDOMM2).", "The results are shown in Table 3 .", "We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests a hypothesis that joint training might be just clustering M 3 and M 1 here, to the extent that M 3 and M 1 are so close that even a random M 2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.", "Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings as we discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices to orthogonal.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations, by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.", "In this work, path 
lengths are sampled from a Poisson distribution, we thus vary the mean λ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5 .", "We can see that, as λ gets larger, MR improves much but MRR slightly drops.", "It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves base models even more as the paths get longer, especially in MR.", "It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discovering compositional constraints and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-ofvocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learn an OOV entity vector (Dettmers et al., 2018 ), our approach is described below.", "For an incomplete triple h, r, ?", "in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation; except for the WN18RR test data which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting that all triples with OOV entities are removed from the test.", "The results are shown in Table 6" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-4
In this work
Propose jointly training relation matrices with an autoencoder, in order to reduce the high dimensionality. Modified SGD with separated learning rates, in order to handle the highly non-convex objective. Use modified SGD to enhance joint training with an autoencoder. Other techniques for training relation matrices. Achieve SOTA on standard KBC datasets.
Propose jointly training relation matrices with an autoencoder, in order to reduce the high dimensionality. Modified SGD with separated learning rates, in order to handle the highly non-convex objective. Use modified SGD to enhance joint training with an autoencoder. Other techniques for training relation matrices. Achieve SOTA on standard KBC datasets.
[]
GEM-SciDuet-train-126#paper-1344#slide-6
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices - for one reason, the composition of two relations M_1, M_2 may match a third M_3 (e.g. the composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M_1 · M_2 ≈ M_3). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M 3 ) also justifies dimension reduction, because it implies a compositional constraint M 1 · M 2 ≈ M 3 that can be satisfied only by a lower dimension sub-manifold in the parameter space 1 .", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices , or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017) .", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g.", "the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1 ).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.", "1 It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form h, r, t , where h, t ∈ E are entities and r ∈ R is a relation (e.g.", "The Matrix, country of film, Australia ).", "A relation r has its inverse r −1 ∈ R so that for every h, r, t ∈ T , we regard t, r −1 , h as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete h, r, ?", "triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u h , v t respectively, and relation r as a d×d matrix M r .", "If u h , v t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into h, r, ?", "is calculated by u h M r (with each nonzero entry corresponds to an answer).", "Thus, we 
"Joint Training with an Autoencoder. Autoencoders learn efficient codings of high-dimensional data while trying to reconstruct the original data from the coding.", "By jointly training relation matrices with an autoencoder, we expect it to also help reduce the dimensionality of the original data (i.e. the relation matrices).",
"Formally, we define a vectorization $m_r$ for each relation matrix $M_r$, and use it as input to the autoencoder.", "$m_r$ is defined as a reshape of $M_r$ flattened into a $d^2$-dimension vector, and normalized such that $\|m_r\| = \sqrt{d}$.", "We define $$c_r := \mathrm{ReLU}(A m_r) \quad (2)$$ as the coding.", "Here $A$ is a $c \times d^2$ matrix with $c \ll d^2$, and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010).", "We reconstruct the input from $c_r$ by multiplying with a $d^2 \times c$ matrix $B$.", "We want $B c_r$ to be more similar to $m_r$ than to other relations.", "For this purpose, we define a similarity $$g(r_1, r_2) := \exp\left(\tfrac{1}{\sqrt{dc}}\, m_{r_1} B c_{r_2}\right), \quad (3)$$ which measures the length of $B c_{r_2}$ projected onto the direction of $m_{r_1}$.",
"In order to learn the parameters $A, B$, we adopt the Noise Contrastive Estimation scheme as in Sec.2: we generate random noises $r^*$ for each relation $r$ and maximize $$L_2 := \sum_{r \in R} \ln \frac{g(r, r)}{k + g(r, r)} + \sum_{r^* \sim R} \ln \frac{k}{k + g(r, r^*)}$$ as our reconstruction objective.", "Maximizing $L_2$ increases $m_r$'s similarity with $B c_r$, and decreases it with $B c_{r^*}$.",
"During joint training, both $L_1$ and $L_2$ are simultaneously maximized, and the gradient $\nabla L_2$ propagates to relation matrices as well.", "Since $\nabla L_2$ depends on $A$ and $B$, and $A, B$ interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.",
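The autoencoder side, equations (2) and (3) with the reconstruction objective $L_2$, can be sketched the same way. This is again a minimal illustration under assumed toy sizes, not the released implementation; note the $1/\sqrt{dc}$ scaling inside $g$, which Sec.4 below identifies as crucial.

```python
import numpy as np

rng = np.random.default_rng(0)
n_relations, d, c, k = 20, 16, 4, 5  # toy sizes (assumptions)

M = rng.normal(0, 1 / np.sqrt(d), (n_relations, d, d))  # relation matrices
A = rng.normal(0, 1 / d, (c, d * d))                    # encoder, c << d^2
B = rng.normal(0, 1 / np.sqrt(c), (d * d, c))           # decoder

def vectorize(Mr):
    """m_r: M_r flattened to d^2 dimensions, normalized to ||m_r|| = sqrt(d)."""
    m = Mr.reshape(-1)
    return m * (np.sqrt(d) / np.linalg.norm(m))

def coding(r):
    """c_r = ReLU(A m_r), equation (2)."""
    return np.maximum(A @ vectorize(M[r]), 0.0)

def g(r1, r2):
    """g(r1, r2) = exp(m_{r1} B c_{r2} / sqrt(d c)), equation (3)."""
    return np.exp(vectorize(M[r1]) @ (B @ coding(r2)) / np.sqrt(d * c))

def reconstruction_loss(r, noise_rels):
    """Negative of the L_2 contribution of relation r with k noise relations."""
    g_rr = g(r, r)
    loss = -np.log(g_rr / (k + g_rr))
    for r_star in noise_rels:
        loss -= np.log(k / (k + g(r, r_star)))
    return loss

print(reconstruction_loss(0, rng.integers(0, n_relations, k)))
```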
"Optimization Tricks. Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both $\nabla L_1$ and $\nabla L_2$; but if they are updated by $\nabla L_1$ too much, the autoencoder has no effect, and conversely, if they are updated by $\nabla L_2$ too often, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse, in which the autoencoder imposes arbitrary patterns on relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of $L_1 + L_2$ does not work.",
"After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important 'magic' is the scaling factor $\tfrac{1}{\sqrt{dc}}$ in the definition of the similarity function (3), perhaps in combination with the other settings we discuss below.", "We have tried the factors $1$, $\tfrac{1}{\sqrt{d}}$, $\tfrac{1}{\sqrt{c}}$ and $\tfrac{1}{dc}$ instead, with various combinations of $d$ and $c$, but the autoencoder failed to learn meaningful codings in those settings.", "When the scaling factor is too small (e.g. $\tfrac{1}{dc}$), all relations get almost the same coding; conversely, if the factor is too large (e.g. $1$), all codings get very close to $0$.",
"The next important rule is to keep a balance between the updates coming from $\nabla L_1$ and $\nabla L_2$.", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as $$\alpha(\tau) := \frac{\eta}{1 + \eta \lambda \tau}. \quad (4)$$", "Here, $\eta, \lambda$ are hyper-parameters and $\tau$ is a counter of processed data points.", "In this work, in order to control the updates in enough detail to keep a balance, we modify (4) to use a step counter $\tau_r$ for each relation $r$, counting the number of updates instead of data points (footnote 2).", "That is, whenever $M_r$ gets a nonzero update from a gradient calculation, $\tau_r$ increases by 1.", "Furthermore, we use different hyper-parameters for the different types of updates, namely $\eta_1, \lambda_1$ for updates coming from $\nabla L_1$, and $\eta_2, \lambda_2$ for updates coming from $\nabla L_2$.", "Thus, letting $\Delta_1$ be the partial gradient of $\nabla L_1$ and $\Delta_2$ the partial gradient of $\nabla L_2$, we update $M_r$ by $\alpha_1(\tau_r)\Delta_1 + \alpha_2(\tau_r)\Delta_2$ at each step, where $$\alpha_1(\tau_r) := \frac{\eta_1}{1 + \eta_1 \lambda_1 \tau_r}, \qquad \alpha_2(\tau_r) := \frac{\eta_2}{1 + \eta_2 \lambda_2 \tau_r}.$$",
"The rule for setting $\eta_1, \lambda_1$ and $\eta_2, \lambda_2$ is that $\eta_2$ should be much smaller than $\eta_1$, because $\eta_1, \eta_2$ control the magnitude of the learning rates at the early stage of training, when the autoencoder is still largely random and $\Delta_2$ does not make much sense; on the other hand, one has to choose $\lambda_1$ and $\lambda_2$ such that $\Delta_1/\lambda_1$ and $\Delta_2/\lambda_2$ are at the same scale, because the learning rates approach $1/(\lambda_1 \tau_r)$ and $1/(\lambda_2 \tau_r)$ respectively as the training proceeds.",
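The modified SGD schedule is easy to state in code: a per-relation step counter $\tau_r$ and separate $(\eta_1, \lambda_1)$ and $(\eta_2, \lambda_2)$ for the two gradient sources. The hyper-parameter values below are the ones reported for the experiments in Sec.6; the function and variable names are hypothetical.

```python
# hyper-parameters as used in the experiments (Sec.6)
eta1, lam1 = 1 / 64, 2 ** -14    # for updates coming from grad L_1
eta2, lam2 = 2 ** -14, 2 ** -14  # for updates coming from grad L_2

tau = {}  # per-relation update counter tau_r

def update_relation(r, M_r, delta1, delta2):
    """M_r <- M_r + alpha1(tau_r) * Delta_1 + alpha2(tau_r) * Delta_2."""
    t = tau.get(r, 0)
    alpha1 = eta1 / (1 + eta1 * lam1 * t)
    alpha2 = eta2 / (1 + eta2 * lam2 * t)
    tau[r] = t + 1  # counts nonzero updates, not processed data points
    return M_r + alpha1 * delta1 + alpha2 * delta2
```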
"In this way, the autoencoder does not impose random patterns on relation matrices according to its initialization at the early stage, and a balance is kept between $\alpha_1(\tau_r)\Delta_1$ and $\alpha_2(\tau_r)\Delta_2$ later.",
"But how to estimate $\Delta_1$ and $\Delta_2$?", "It seems that we can approximately calculate their scales from the initialization.", "In this work, we use i.i.d. Gaussians of variance $1/d$ to initialize parameters, so the initial Euclidean norms are $\|u_h\| \approx 1$, $\|v_t\| \approx 1$, $\|M_r\| \approx \sqrt{d}$, and $\|B A m_r\| \approx \sqrt{dc}$.", "Thus, by calculating $\nabla L_1$ and $\nabla L_2$ using (1) and (3), we have approximately $\|\Delta_1\| \approx \|u_h\|\|v_t\| \approx 1$, and $\|\Delta_2\| \approx \tfrac{1}{\sqrt{dc}} \|B c_r\| \approx \tfrac{1}{\sqrt{dc}} \|B A m_r\| \approx 1$.", "It suggests that, because of the scaling factor $\tfrac{1}{\sqrt{dc}}$ in (3), $\Delta_1$ and $\Delta_2$ are at the same scale, so we can set $\lambda_1 = \lambda_2$.", "This might not be a mere coincidence.",
"Training the Base Model. Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show the performance gains of these settings using the FB15k-237 validation set.",
"Normalization: it is better to normalize relation matrices to $\|M_r\| = \sqrt{d}$ during training.", "This might reduce fluctuations in entity vector updates.",
"Regularizer: it is better to minimize $\|M_r^\top M_r - \tfrac{1}{d}\,\mathrm{tr}(M_r^\top M_r)\, I\|$ during training.", "This regularizer drives $M_r$ toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.",
"Initialization: instead of a pure Gaussian, it is better to initialize matrices as $(I + G)/2$, where $G$ is random.", "The identity matrix $I$ helps passing information from head to tail (Tian et al., 2016).",
"Negative Sampling: instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somehow counterintuitive compared to training word embeddings.",
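The three matrix-related settings translate directly into code. The following is a sketch under the assumption that $\|M_r\|$ denotes the Frobenius norm (the text does not name the norm explicitly); the helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

def init_relation():
    """(I + G)/2 with G Gaussian: the identity passes information head to tail."""
    G = rng.normal(0, 1 / np.sqrt(d), (d, d))
    return (np.eye(d) + G) / 2

def normalize(Mr):
    """Rescale so that ||M_r|| = sqrt(d) (Frobenius norm, assumed)."""
    return Mr * (np.sqrt(d) / np.linalg.norm(Mr))

def ortho_penalty(Mr):
    """|| M_r^T M_r - (1/d) tr(M_r^T M_r) I ||, driving M_r toward orthogonal."""
    MtM = Mr.T @ Mr
    return np.linalg.norm(MtM - (np.trace(MtM) / d) * np.eye(d))

Mr = normalize(init_relation())
print(ortho_penalty(Mr))  # a small penalty means M_r is close to orthogonal
```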
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
"WN18 collects word relations from WordNet (Miller, 1995), and FB15k is taken from Freebase (Bollacker et al., 2008); both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks, because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from the training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.",
"For all datasets, we set the dimensions $d = 256$ and $c = 16$, and the SGD hyper-parameters $\eta_1 = 1/64$, $\eta_2 = 2^{-14}$ and $\lambda_1 = \lambda_2 = 2^{-14}$.", "The training batch size is 32, and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our model jointly trained with an autoencoder (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walks to sample paths of length $1 + X$, where $X$ is drawn from a Poisson distribution of mean $\lambda = 1.0$.",
"For any incomplete triple $\langle h, r, ? \rangle$ in the KBC test, we calculate a score $s(h, r, e)$ from (1) for every entity $e \in E$ such that $\langle h, r, e \rangle$ does not appear in any of the training, validation, or test sets (Bordes et al., 2013).", "Then, the calculated scores, together with $s(h, r, t)$ for the gold triple, are converted to ranks, and the rank of the gold entity $t$ is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on the validation sets to determine the number of training epochs; we stop training when both MR and MRR have stopped improving.",
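The filtered ranking protocol can be sketched as below, where `score_fn` stands for equation (1) and `known` for the union of training, validation, and test triples. This is an illustrative exhaustive loop under those assumptions, not optimized evaluation code.

```python
import numpy as np

def evaluate(test_triples, known, score_fn, n_entities):
    """Filtered MR, MRR and Hits@10 over (h, r, t) test triples.

    Candidate entities e with (h, r, e) in `known` (other than the gold
    tail itself) are excluded before ranking (Bordes et al., 2013).
    """
    ranks = []
    for h, r, t in test_triples:
        gold = score_fn(h, r, t)
        rank = 1
        for e in range(n_entities):
            if e != t and (h, r, e) not in known and score_fn(h, r, e) > gold:
                rank += 1
        ranks.append(rank)
    ranks = np.asarray(ranks, dtype=float)
    return ranks.mean(), (1.0 / ranks).mean(), (ranks <= 10).mean()
```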
"KBC Results. The results are shown in Table 2.", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes clearer when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because, generally, joint training contributes through its regularizing effects, and drastic improvements are less expected (footnote 3).", "Footnote 3: The source code and trained models are publicly released at https://github.com/tianran/glimvec.",
"(Figure 2: sparse codings of three groups of relations over the 16 coding dimensions: near one-hot relations such as profession, profession^{-1}, film_crew_role^{-1}, film_release_region^{-1}, film_language^{-1} and nationality; currency relations such as currency_of_country, currency_of_company, currency_of_university and currency_of_film_budget; and film relations such as currency_of_film_budget, release_region_of_film, corporation_of_film, producer_of_film and writer_of_film.)",
"When compositional training is enabled, the system usually achieves better MR, though it does not always improve in the other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018).", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For the re-experiments, we use Lin et al. (2015b)'s implementation (footnote 4) of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations, and Nickel et al. (2016b)'s implementation (footnote 5) of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We experimented with the default settings, and found that our models outperform most of them.",
"Among the published results, STransE (Nguyen et al., 2016), ITransF (Xie et al., 2017), ComplEx (Trouillon et al., 2016) and ConvE were previously the best results.", "Our models mostly outperform them.", "Table 2: KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets.", "The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previously published results.", "Bold numbers are the best in each sector, and (*) indicates the best of all.", "Other results include Kadlec et al. (2017)'s simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve the best results on FB15k or WN18 in some measure.", "Our models have comparable results.",
"Intuition and Insight. What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions with analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.",
"Sparse Coding and Interpretability. Due to the ReLU function in (2), our autoencoder learns sparse codings, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2, we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g. film and language), which probably constitute the skeleton of a KB.", "In the second group, we found that the 12th dimension strongly correlates with currency; and in the third group, we found that the 4th dimension strongly correlates with film.", "As for the relation currency_of_film_budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices and never constrains them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer_of_film and writer_of_film have codings of cosine similarity 0.973, but their relation matrices only have a cosine similarity of 0.338 (footnote 6).",
"Low dimension manifold. In order to visualize the relation matrices learned by our joint and base models, we use UMAP (footnote 7) (McInnes and Healy, 2018) to embed $M_r$ into a 2D plane (footnote 8).", "Footnote 8: We also tried t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful.", "We use relation matrices trained on FB15k-237, and compare models trained for the same number of epochs.", "The results are shown in Figure 3.", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with frequent ones, and multiple traces of low dimension structures.", "It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures from Figure 3b, which we conjecture could be related to compositional constraints discovered by compositional training.",
"Compositional constraints. In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of $(r_1/r_2, r_3)$ pairs such that $r_1/r_2$ matches $r_3$.", "Formally, the list is constructed as below.", "For any relation $r$, we define a content set $C(r)$ as the set of $(h, t)$ pairs such that $\langle h, r, t \rangle$ is a fact in the KB.", "Similarly, we define $C(r_1/r_2)$ as the set of $(h, t)$ pairs such that $\langle h, r_1/r_2, t \rangle$ is a path.", "We regard $(r_1/r_2, r_3)$ as a compositional constraint if their content sets are similar, that is, if $|C(r_1/r_2) \cap C(r_3)| \geq 50$ and the Jaccard similarity between $C(r_1/r_2)$ and $C(r_3)$ is $\geq 0.4$.", "Then, after filtering out degenerate cases such as $r_1 = r_3$ or $r_2 = r_1^{-1}$, we obtained a list of 154 compositional constraints, e.g. (currency_of_country / country_of_film, currency_of_film_budget).",
"For each compositional constraint $(r_1/r_2, r_3)$ in the list, we take the matrices $M_1$, $M_2$ and $M_3$ corresponding to $r_1$, $r_2$ and $r_3$ respectively, and rank $M_3$ according to its cosine similarity with $M_1 M_2$, among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as to a randomized baseline in which $M_2$ is instead selected randomly from the relation matrices in JOINT+COMP (RANDOMM2).", "The results are shown in Table 3.", "We have evaluated 5 different random initializations for each model, trained for the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests the hypothesis that joint training might just be clustering $M_3$ and $M_1$ here, to the extent that $M_3$ and $M_1$ are so close that even a random $M_2$ can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.",
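Both the constraint extraction and the cosine-similarity ranking described above are mechanical enough to sketch. The helper names below are hypothetical; triples are assumed to be (h, r, t) tuples with inverse relations already added, and `M` an array of relation matrices.

```python
import numpy as np
from collections import defaultdict

def content_sets(triples):
    """C(r): the set of (h, t) pairs such that <h, r, t> is a fact in the KB."""
    C = defaultdict(set)
    for h, r, t in triples:
        C[r].add((h, t))
    return C

def composed_content(C, r1, r2):
    """C(r1/r2): (h, t) pairs joined by a length-2 path through r1 then r2."""
    heads_of = defaultdict(set)  # middle entity -> heads reaching it via r1
    for h, m in C[r1]:
        heads_of[m].add(h)
    return {(h, t) for m, t in C[r2] for h in heads_of.get(m, ())}

def is_constraint(C, r1, r2, r3, min_overlap=50, min_jaccard=0.4):
    """(r1/r2, r3) is kept if the content sets overlap in at least
    min_overlap pairs and their Jaccard similarity is >= min_jaccard."""
    comp = composed_content(C, r1, r2)
    inter, union = len(comp & C[r3]), len(comp | C[r3])
    return inter >= min_overlap and union > 0 and inter / union >= min_jaccard

def rank_composition(M, r1, r2, r3):
    """Rank of M_{r3} among all relation matrices, by cosine similarity
    with the product M_{r1} M_{r2} (rank 1 = most similar)."""
    target = (M[r1] @ M[r2]).reshape(-1)
    target = target / np.linalg.norm(target)
    sims = np.array([target @ (Mr.reshape(-1) / np.linalg.norm(Mr))
                     for Mr in M])
    return 1 + int((sims > sims[r3]).sum())
```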
"Losses and Gains. In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show that (i) some settings are crucial for the base model, and (ii) joint training with an autoencoder benefits more from compositional training.",
"Crucial settings for the base model. It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings, as discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices toward orthogonal.",
"Gains with compositional training. One can force a model to focus more on (longer) compositions of relations by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.",
"In this work, path lengths are sampled from a Poisson distribution, so we vary the mean $\lambda$ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5.", "We can see that, as $\lambda$ gets larger, MR improves much but MRR slightly drops.", "It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing the correct one.", "Yet, joint training improves the base models even more as the paths get longer, especially in MR.", "It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.",
"Conclusion. We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks, with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; that the joint training technique drives high-dimensional data toward low dimension manifolds; and that the reduction of dimensionality may interact strongly with composition, help discovering compositional constraints, and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.",
"Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below.", "For an incomplete triple $\langle h, r, ? \rangle$ in the test, if $h$ is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation $r$ in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data, which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting where all triples with OOV entities are removed from the test.", "The results are shown in Table 6." ] }
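The OOV fallback described above amounts to a small lookup table. A sketch assuming (h, r, t) training triples; the helper name is hypothetical.

```python
from collections import Counter, defaultdict

def head_fallbacks(train_triples):
    """For each relation r, the most frequent training head of r, used to
    replace an out-of-vocabulary head entity at test time."""
    counts = defaultdict(Counter)
    for h, r, t in train_triples:
        counts[r][h] += 1
    return {r: c.most_common(1)[0][0] for r, c in counts.items()}

# An OOV gold tail is instead scored with the zero vector, i.e. its score
# under equation (1) is exp(0) = 1, and it is ranked accordingly.
```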
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-6
Joint Training with an Autoencoder
Represent relations as matrices in a bilinear model [Nickel+11, Guu+15, Tian+16]. Train an autoencoder to reconstruct relation matrices during training (original vs. reconstructed), unlike autoencoders in which the original input is not updated. Reduce the high dimensionality of relation matrices. Help learn composition of relations. Not easy to carry out: the training objective is highly non-convex and easily falls into local minimums.
Represent relations as matrices in a bilinear model [Nickel+11, Guu+15, Tian+16]. Train an autoencoder to reconstruct relation matrices during training (original vs. reconstructed), unlike autoencoders in which the original input is not updated. Reduce the high dimensionality of relation matrices. Help learn composition of relations. Not easy to carry out: the training objective is highly non-convex and easily falls into local minimums.
[]
GEM-SciDuet-train-126#paper-1344#slide-7
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices: for one reason, composition of two relations $M_1$, $M_2$ may match a third $M_3$ (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. $M_1 \cdot M_2 \approx M_3$). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M 3 ) also justifies dimension reduction, because it implies a compositional constraint M 1 · M 2 ≈ M 3 that can be satisfied only by a lower dimension sub-manifold in the parameter space 1 .", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices , or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017) .", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g.", "the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1 ).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.", "1 It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form h, r, t , where h, t ∈ E are entities and r ∈ R is a relation (e.g.", "The Matrix, country of film, Australia ).", "A relation r has its inverse r −1 ∈ R so that for every h, r, t ∈ T , we regard t, r −1 , h as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete h, r, ?", "triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u h , v t respectively, and relation r as a d×d matrix M r .", "If u h , v t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into h, r, ?", "is calculated by u h M r (with each nonzero entry corresponds to an answer).", "Thus, we 
have u h M r v t > 0 if and only if h, r, t ∈ T .", "This motivates us to use u h M r v t as a natural parameter to model plausibility of h, r, t , even in a low dimension space with d |E|.", "Thus, we define the score function as s(h, r, t) := exp(u h M r v t ) (1) for the basic model.", "This is similar to the bilinear model of Nickel et al.", "(2011) , except that we distinguish u h (the vector for head entities) from v t (the vector for tail entities).", "It has also been proposed in Tian et al.", "(2016) , but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations r 1 / .", ".", ".", "/r l to model paths in a KB (Guu et al., 2015) , as defined by r 1 , .", ".", ".", ", r l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous.", "For example, a sequence of two facts The Matrix, country of film, Australia and Australia, currency of country, Australian Dollar form a path of composition country of film / currency of country, because the head of the second fact (i.e.", "Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r 1 / .", ".", ".", "/r l , t) := exp(u h M r 1 · · · M r l v t ) to measure the plausibility of a path.", "It is explored in Guu et al.", "(2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn parameters u h , v t , M r of the score function, we follow Tian et al.", "(2016) using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) h, r 1 / .", ".", ".", ", t taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t * .", "Then, we maximize L 1 := path ln s(h, r 1 / .", ".", ".", ", t) k + s(h, r 1 / .", ".", ".", ", t) + noise ln k k + s(h, r 1 / .", ".", ".", ", t * ) as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as probability, L 1 represents the log-likelihood of \" h, r 1 / .", ".", ".", ", t being actual path and h, r 1 / .", ".", ".", ", t * being noise\".", "Maximizing L 1 increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder Autoencoders learn efficient codings of highdimensional data while trying to reconstruct the original data from the coding.", "By joint training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e.", "relation matrices).", "Formally, we define a vectorization m r for each relation matrix M r , and use it as input to the autoencoder.", "m r is defined as a reshape of M r flattened into a d 2 -dimension vector, and normalized such that m r = √ d. 
We define c r := ReLU(Am r ) (2) as the coding.", "Here A is a c × d 2 matrix with c d 2 , and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010) .", "We reconstruct the input from c r by multiplying a d 2 × c matrix B.", "We want Bc r to be more similar to m r than other relations.", "For this purpose, we define a similarity g(r 1 , r 2 ) := exp( 1 √ dc m r 1 Bc r 2 ), (3) which measures the length of Bc r 2 projected to the direction of m r 1 .", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r * for each relation r and maximize L 2 := r∈R ln g(r, r) k + g(r, r) + r * ∼R ln k k + g(r, r * ) as our reconstruction objective.", "Maximizing L 2 increases m r 's similarity with Bc r , and decreases it with Bc r * .", "During joint training, both L 1 and L 2 are simultaneously maximized, and the gradient ∇L 2 propagates to relation matrices as well.", "Since ∇L 2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L 1 and ∇L 2 , but if they update ∇L 1 too much, the autoencoder has no effect; conversely, if they update ∇L 2 too often, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse -in which the autoencoder imposes arbitrary patterns to relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L 1 + L 2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1 √ dc in definition of the similarity function (3) , perhaps being combined with other settings as we discuss below.", "We have tried different factors 1, 1 √ d , 1 √ c and 1 dc instead, with various combinations of d and c; but the autoencoder failed to learn meaningful codings in other settings.", "When the scaling factor is too small (e.g.", "1 dc ), all relations get almost the same coding; conversely if the factor is too large (e.g.", "1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L 1 and ∇L 2 .", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ ) := η 1 + ηλτ .", "(4) Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a a step counter τ r for each relation r, counting \"number of updates\" instead of data points 2 .", "That is, whenever M r gets a nonzero update from a gradient calculation, τ r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η 1 , λ 1 for updates coming from ∇L 1 , and η 2 , λ 2 for updates coming from ∇L 2 .", "Thus, let ∆ 1 be the partial gradient of ∇L 1 , and ∆ 2 the partial gradient of ∇L 2 , we update M r by α 1 (τ r )∆ 1 + α 2 (τ r )∆ 2 at each step, where α 1 (τ r ) := η 1 1 + η 1 λ 1 τ r , α 2 (τ r ) := η 2 1 + η 2 λ 2 τ r .", "The rule for setting η 1 , λ 1 and η 2 , λ 2 is 
that, η 2 should be much smaller than η 1 , because η 1 , η 2 control the magnitude of learning rates at the early stage of training, with the autoencoder still largely random and ∆ 2 not making much sense; on the other hand, one has to choose λ 1 and λ 2 such that ∆ 1 /λ 1 and ∆ 2 /λ 2 are at the same scale, because the learning rates approach 1/(λ 1 τ r ) and 1/(λ 2 τ r ) respectively, as the training proceeds.", "In this way, the autoencoder will not impose random patterns to relation matrices according to its initialization at the early stage, and a balance is kept between α 1 (τ r )∆ 1 and α 2 (τ r )∆ 2 later.", "But how to estimate ∆ 1 and ∆ 2 ?", "It seems that we can approximately calculate their scales from initialization.", "In this work, we use i.i.d.", "Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are u h ≈ 1, v t ≈ 1, M r ≈ √ d, and BAm r ≈ √ dc.", "Thus, by calculating ∇L 1 and ∇L 2 using (1) and (3) , we have approximately ∆ 1 ≈ u h v t ≈ 1, and ∆ 2 ≈ 1 √ dc Bc r ≈ 1 √ dc BAm r ≈ 1.", "It suggests that, because of the scaling factor 1 √ dc in (3), we have ∆ 1 and ∆ 2 at the same scale, so we can set λ 1 = λ 2 .", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show performance gains by these settings using the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to M r = √ d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize M r M r − 1 d tr(M r M r )I during training.", "This regularizer drives M r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016) .", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somehow counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017) .", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al.", "(2013) 's vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1to-1 relations but might be too simple to represent N -to-N relations accurately (Wang et al., 2017) .", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) are proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.", 
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
WordNet (Miller, 1995) , and FB15k is taken from Freebase (Bollacker et al., 2008) ; both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimension d = 256 and c = 16, the SGD hyper-parameters η 1 = 1/64, η 2 = 2 −14 and λ 1 = λ 2 = 2 −14 .", "The training batch size is 32 and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walk to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple h, r, ?", "in KBC test, we calculate a score s(h, r, e) from (1), for every entity e ∈ E such that h, r, e does not appear in any of the training, validation, or test sets (Bordes et al., 2013) .", "Then, the calculated scores together with s(h, r, t) for the gold triple is converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.", "KBC Results The results are shown in Table 2 .", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes more clear when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because generally, joint training contributes with its regularizing effects, and drastic improvements are less expected 3 .", "When compositional training is enabled, 3 The source code and trained models are publicly released at https://github.com/tianran/glimvec, where profession profession −1 film_crew_role −1 film_release_region −1 film_language −1 nationality currency_of_country currency_of_company currency_of_university currency_of_film_budget 2 4 6 8 10 12 14 16 currency_of_film_budget release_region_of_film corporation_of_film producer_of_film writer_of_film the system usually achieves better MR, though not always improves in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018) .", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For re-experiments, we use Lin et al.", "(2015b) 's implementation 4 of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations; and Nickel et al.", "(2016b) 's implementation 5 of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We 
experimented with the default settings, and found that our models outperform most of them.", "Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017) Table 2 : KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets.", "The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results.", "Bold numbers are the best in each sector, and ( * ) indicates the best of all.", "(Trouillon et al., 2016) and ConvE were previously the best results.", "Our models mostly outperform them.", "Other results include Kadlec et al.", "(2017) 's simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2 , we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g.", "film and language), which probably constitute the skeleton of a KB.", "In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrain them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have 6 a cosine similarity 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP 7 (McInnes and Healy, 2018) to embed M r into a 2D plane 8 .", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3 .", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with 
"It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures against Figure 3b, which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints: In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r_1/r_2, r_3) pairs such that r_1/r_2 matches r_3.", "Formally, the list is constructed as below.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that ⟨h, r, t⟩ is a fact in the KB.", "Similarly, we define C(r_1/r_2) as the set of (h, t) pairs such that ⟨h, r_1/r_2, t⟩ is a path.", "We regard (r_1/r_2, r_3) as a compositional constraint if their content sets are similar; that is, if |C(r_1/r_2) ∩ C(r_3)| ≥ 50 and the Jaccard similarity between C(r_1/r_2) and C(r_3) is ≥ 0.4.", "Then, after filtering out degenerate cases such as r_1 = r_3 or r_2 = r_1^{-1}, we obtained a list of 154 compositional constraints, e.g. (currency of country / country of film, currency of film budget).", "For each compositional constraint (r_1/r_2, r_3) in the list, we take the matrices M_1, M_2 and M_3 corresponding to r_1, r_2 and r_3 respectively, and rank M_3 according to its cosine similarity with M_1 M_2, among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as a randomized baseline where M_2 is selected randomly from the relation matrices in JOINT+COMP instead (RANDOMM2).", "The results are shown in Table 3.", "We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discover compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests the hypothesis that joint training might just be clustering M_3 and M_1 here, to the extent that M_3 and M_1 are so close that even a random M_2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.", "Losses and Gains: In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) that joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model: It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings as we discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices toward orthogonality.", "Gains with compositional training: One can force a model to focus more on (longer) compositions of relations, by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discover compositional constraints, we expect it to be more helpful when the sampled paths are longer.",
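A sketch of the constraint-extraction and ranking procedure described above; the KB interface (a plain set of (h, r, t) triples) and the function names are assumptions, not the authors' code.

```python
import numpy as np

def content_set(facts, r):
    """C(r): all (h, t) with <h, r, t> a fact in the KB."""
    return {(h, t) for (h, rel, t) in facts if rel == r}

def composed_content(facts, r1, r2):
    """C(r1/r2): all (h, t) connected by a length-2 path r1 then r2."""
    by_head = {}
    for (h, rel, t) in facts:
        if rel == r2:
            by_head.setdefault(h, set()).add(t)
    return {(h, t2) for (h, rel, t1) in facts if rel == r1
            for t2 in by_head.get(t1, ())}

def is_constraint(c12, c3, min_overlap=50, min_jaccard=0.4):
    """The paper's filter: |intersection| >= 50 and Jaccard >= 0.4."""
    inter = len(c12 & c3)
    return inter >= min_overlap and inter / len(c12 | c3) >= min_jaccard

def rank_of_M3(M1, M2, M3, all_mats):
    """Rank M3 by cosine similarity to M1 @ M2 among all relation matrices."""
    target = (M1 @ M2).ravel()
    def cos(M):
        v = M.ravel()
        return v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
    sims = sorted((cos(M) for M in all_mats), reverse=True)
    return 1 + sims.index(cos(M3))
```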
"In this work, path lengths are sampled from a Poisson distribution, so we vary the mean λ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5.", "We can see that, as λ gets larger, MR improves considerably but MRR slightly drops.", "It suggests that in FB15k-237, composition of relations might mainly help find more appropriate candidates for a missing entity, rather than pinpoint a correct one.", "Yet, joint training improves the base models even more as the paths get longer, especially in MR.", "It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.", "Conclusion: We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks, with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discover compositional constraints, and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below.", "For an incomplete triple ⟨h, r, ?⟩ in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data, which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting where all triples with OOV entities are removed from the test.", "The results are shown in Table 6." ] }
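The random-walk sampler for compositional training might look as follows; the KB index, the entity/relation names, and the dead-end handling are hypothetical choices, not the released implementation.

```python
import numpy as np
from collections import defaultdict

def build_index(facts):
    """Outgoing edges per entity; facts should already include inverse relations."""
    out = defaultdict(list)
    for h, r, t in facts:
        out[h].append((r, t))
    return out

def sample_path(out, rng, lam=1.0):
    """Random walk of length 1 + X with X ~ Poisson(lam), as in compositional
    training; entities are assumed hashable scalars (e.g. strings or ints)."""
    length = 1 + rng.poisson(lam)
    h = rng.choice(list(out))            # start at a random head entity
    rels, t = [], h
    for _ in range(length):
        if not out[t]:                   # dead end: stop early (one option)
            break
        r, t = out[t][rng.integers(len(out[t]))]
        rels.append(r)
    return h, rels, t                    # score with s(h, r1/.../rl, t)

rng = np.random.default_rng(0)
idx = build_index([("a", "r", "b"), ("b", "r_inv", "a"), ("b", "q", "c")])
print(sample_path(idx, rng, lam=1.0))
```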
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-7
Modified SGD Separated Learning Rates
Different learning rates for different parts of our model. The common practice for setting learning rates of SGD [Bottou, 2012]: α(τ) = η / (1 + ηλτ), with η the initial learning rate, λ the coefficient of the L2-regularizer, and τ a counter of trained examples. Different parts in a neural network may have different learning rates: η_1, λ_1 for the KB-learning objective and η_2, λ_2 for the autoencoder objective. Learning rates for frequent entities and relations can decay more quickly, by keeping a counter of each head entity, a counter of each tail entity, and a counter of each relation. A neural network can usually be decomposed into several parts, each one convex when the other parts are fixed, so a neural network is a joint co-training of many such parts, and it is natural to assume a different learning rate for each part.
Different learning rates for different parts of our model. The common practice for setting learning rates of SGD [Bottou, 2012]: α(τ) = η / (1 + ηλτ), with η the initial learning rate, λ the coefficient of the L2-regularizer, and τ a counter of trained examples. Different parts in a neural network may have different learning rates: η_1, λ_1 for the KB-learning objective and η_2, λ_2 for the autoencoder objective. Learning rates for frequent entities and relations can decay more quickly, by keeping a counter of each head entity, a counter of each tail entity, and a counter of each relation. A neural network can usually be decomposed into several parts, each one convex when the other parts are fixed, so a neural network is a joint co-training of many such parts, and it is natural to assume a different learning rate for each part.
[]
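The slide's modified SGD schedule can be sketched as below, using the hyper-parameters reported in the paper (η_1 = 1/64, η_2 = λ_1 = λ_2 = 2^{-14}); the ScheduledParam wrapper is our own illustrative construction, not the authors' code.

```python
import numpy as np

class ScheduledParam:
    """A parameter with its own update counter tau; each objective contributes
    a gradient with its own (eta, lam) pair, as in the modified SGD."""
    def __init__(self, value):
        self.value = value
        self.tau = 0                      # counts nonzero updates, not examples

    def apply(self, grads):
        """grads: list of (delta, eta, lam), one entry per objective."""
        step = sum(eta / (1.0 + eta * lam * self.tau) * delta
                   for delta, eta, lam in grads)
        self.value += step
        self.tau += 1

ETA1, ETA2, LAM = 1 / 64, 2 ** -14, 2 ** -14
M_r = ScheduledParam(np.eye(4))                      # toy relation matrix
M_r.apply([(np.full((4, 4), 0.1), ETA1, LAM),        # delta_1 from KB objective
           (np.full((4, 4), 0.1), ETA2, LAM)])       # delta_2 from AE objective
```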
GEM-SciDuet-train-126#paper-1344#slide-8
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices: for one reason, the composition of two relations M_1, M_2 may match a third M_3 (e.g. the composition of the relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M_1 · M_2 ≈ M_3). In this paper we investigate a dimension reduction technique that trains relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discover compositional constraints, and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M 3 ) also justifies dimension reduction, because it implies a compositional constraint M 1 · M 2 ≈ M 3 that can be satisfied only by a lower dimension sub-manifold in the parameter space 1 .", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices , or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017) .", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g.", "the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1 ).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.", "1 It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form h, r, t , where h, t ∈ E are entities and r ∈ R is a relation (e.g.", "The Matrix, country of film, Australia ).", "A relation r has its inverse r −1 ∈ R so that for every h, r, t ∈ T , we regard t, r −1 , h as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete h, r, ?", "triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u h , v t respectively, and relation r as a d×d matrix M r .", "If u h , v t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into h, r, ?", "is calculated by u h M r (with each nonzero entry corresponds to an answer).", "Thus, we 
have u h M r v t > 0 if and only if h, r, t ∈ T .", "This motivates us to use u h M r v t as a natural parameter to model plausibility of h, r, t , even in a low dimension space with d |E|.", "Thus, we define the score function as s(h, r, t) := exp(u h M r v t ) (1) for the basic model.", "This is similar to the bilinear model of Nickel et al.", "(2011) , except that we distinguish u h (the vector for head entities) from v t (the vector for tail entities).", "It has also been proposed in Tian et al.", "(2016) , but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations r 1 / .", ".", ".", "/r l to model paths in a KB (Guu et al., 2015) , as defined by r 1 , .", ".", ".", ", r l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous.", "For example, a sequence of two facts The Matrix, country of film, Australia and Australia, currency of country, Australian Dollar form a path of composition country of film / currency of country, because the head of the second fact (i.e.", "Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r 1 / .", ".", ".", "/r l , t) := exp(u h M r 1 · · · M r l v t ) to measure the plausibility of a path.", "It is explored in Guu et al.", "(2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn parameters u h , v t , M r of the score function, we follow Tian et al.", "(2016) using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) h, r 1 / .", ".", ".", ", t taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t * .", "Then, we maximize L 1 := path ln s(h, r 1 / .", ".", ".", ", t) k + s(h, r 1 / .", ".", ".", ", t) + noise ln k k + s(h, r 1 / .", ".", ".", ", t * ) as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as probability, L 1 represents the log-likelihood of \" h, r 1 / .", ".", ".", ", t being actual path and h, r 1 / .", ".", ".", ", t * being noise\".", "Maximizing L 1 increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder Autoencoders learn efficient codings of highdimensional data while trying to reconstruct the original data from the coding.", "By joint training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e.", "relation matrices).", "Formally, we define a vectorization m r for each relation matrix M r , and use it as input to the autoencoder.", "m r is defined as a reshape of M r flattened into a d 2 -dimension vector, and normalized such that m r = √ d. 
We define c r := ReLU(Am r ) (2) as the coding.", "Here A is a c × d 2 matrix with c d 2 , and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010) .", "We reconstruct the input from c r by multiplying a d 2 × c matrix B.", "We want Bc r to be more similar to m r than other relations.", "For this purpose, we define a similarity g(r 1 , r 2 ) := exp( 1 √ dc m r 1 Bc r 2 ), (3) which measures the length of Bc r 2 projected to the direction of m r 1 .", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r * for each relation r and maximize L 2 := r∈R ln g(r, r) k + g(r, r) + r * ∼R ln k k + g(r, r * ) as our reconstruction objective.", "Maximizing L 2 increases m r 's similarity with Bc r , and decreases it with Bc r * .", "During joint training, both L 1 and L 2 are simultaneously maximized, and the gradient ∇L 2 propagates to relation matrices as well.", "Since ∇L 2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L 1 and ∇L 2 , but if they update ∇L 1 too much, the autoencoder has no effect; conversely, if they update ∇L 2 too often, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse -in which the autoencoder imposes arbitrary patterns to relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L 1 + L 2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1 √ dc in definition of the similarity function (3) , perhaps being combined with other settings as we discuss below.", "We have tried different factors 1, 1 √ d , 1 √ c and 1 dc instead, with various combinations of d and c; but the autoencoder failed to learn meaningful codings in other settings.", "When the scaling factor is too small (e.g.", "1 dc ), all relations get almost the same coding; conversely if the factor is too large (e.g.", "1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L 1 and ∇L 2 .", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ ) := η 1 + ηλτ .", "(4) Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a a step counter τ r for each relation r, counting \"number of updates\" instead of data points 2 .", "That is, whenever M r gets a nonzero update from a gradient calculation, τ r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η 1 , λ 1 for updates coming from ∇L 1 , and η 2 , λ 2 for updates coming from ∇L 2 .", "Thus, let ∆ 1 be the partial gradient of ∇L 1 , and ∆ 2 the partial gradient of ∇L 2 , we update M r by α 1 (τ r )∆ 1 + α 2 (τ r )∆ 2 at each step, where α 1 (τ r ) := η 1 1 + η 1 λ 1 τ r , α 2 (τ r ) := η 2 1 + η 2 λ 2 τ r .", "The rule for setting η 1 , λ 1 and η 2 , λ 2 is 
that, η 2 should be much smaller than η 1 , because η 1 , η 2 control the magnitude of learning rates at the early stage of training, with the autoencoder still largely random and ∆ 2 not making much sense; on the other hand, one has to choose λ 1 and λ 2 such that ∆ 1 /λ 1 and ∆ 2 /λ 2 are at the same scale, because the learning rates approach 1/(λ 1 τ r ) and 1/(λ 2 τ r ) respectively, as the training proceeds.", "In this way, the autoencoder will not impose random patterns to relation matrices according to its initialization at the early stage, and a balance is kept between α 1 (τ r )∆ 1 and α 2 (τ r )∆ 2 later.", "But how to estimate ∆ 1 and ∆ 2 ?", "It seems that we can approximately calculate their scales from initialization.", "In this work, we use i.i.d.", "Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are u h ≈ 1, v t ≈ 1, M r ≈ √ d, and BAm r ≈ √ dc.", "Thus, by calculating ∇L 1 and ∇L 2 using (1) and (3) , we have approximately ∆ 1 ≈ u h v t ≈ 1, and ∆ 2 ≈ 1 √ dc Bc r ≈ 1 √ dc BAm r ≈ 1.", "It suggests that, because of the scaling factor 1 √ dc in (3), we have ∆ 1 and ∆ 2 at the same scale, so we can set λ 1 = λ 2 .", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show performance gains by these settings using the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to M r = √ d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize M r M r − 1 d tr(M r M r )I during training.", "This regularizer drives M r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016) .", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somehow counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017) .", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al.", "(2013) 's vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1to-1 relations but might be too simple to represent N -to-N relations accurately (Wang et al., 2017) .", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) are proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.", 
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
WordNet (Miller, 1995) , and FB15k is taken from Freebase (Bollacker et al., 2008) ; both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimension d = 256 and c = 16, the SGD hyper-parameters η 1 = 1/64, η 2 = 2 −14 and λ 1 = λ 2 = 2 −14 .", "The training batch size is 32 and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walk to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple h, r, ?", "in KBC test, we calculate a score s(h, r, e) from (1), for every entity e ∈ E such that h, r, e does not appear in any of the training, validation, or test sets (Bordes et al., 2013) .", "Then, the calculated scores together with s(h, r, t) for the gold triple is converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.", "KBC Results The results are shown in Table 2 .", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes more clear when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because generally, joint training contributes with its regularizing effects, and drastic improvements are less expected 3 .", "When compositional training is enabled, 3 The source code and trained models are publicly released at https://github.com/tianran/glimvec, where profession profession −1 film_crew_role −1 film_release_region −1 film_language −1 nationality currency_of_country currency_of_company currency_of_university currency_of_film_budget 2 4 6 8 10 12 14 16 currency_of_film_budget release_region_of_film corporation_of_film producer_of_film writer_of_film the system usually achieves better MR, though not always improves in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018) .", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For re-experiments, we use Lin et al.", "(2015b) 's implementation 4 of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations; and Nickel et al.", "(2016b) 's implementation 5 of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We 
experimented with the default settings, and found that our models outperform most of them.", "Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017) Table 2 : KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets.", "The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results.", "Bold numbers are the best in each sector, and ( * ) indicates the best of all.", "(Trouillon et al., 2016) and ConvE were previously the best results.", "Our models mostly outperform them.", "Other results include Kadlec et al.", "(2017) 's simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2 , we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g.", "film and language), which probably constitute the skeleton of a KB.", "In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrain them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have 6 a cosine similarity 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP 7 (McInnes and Healy, 2018) to embed M r into a 2D plane 8 .", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3 .", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with 
frequent ones, and multiple traces of low dimension structures.", "It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures against Figure 3b , which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r 1 /r 2 , r 3 ) pairs such that r 1 /r 2 matches r 3 .", "Formally, the list is constructed as below.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that h, r, t is a fact in the KB.", "Similarly, we define C(r 1 /r 2 ) t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful.", "as the set of (h, t) pairs such that h, r 1 /r 2 , t is a path.", "We regard (r 1 /r 2 , r 3 ) as a compositional constraint if their content sets are similar; that is, if |C(r 1 /r 2 ) ∩ C(r 3 )| ≥ 50 and the Jaccard similarity between C(r 1 /r 2 ) and C(r 3 ) is ≥ 0.4.", "Then, after filtering out degenerated cases such as r 1 = r 3 or r 2 = r −1 1 , we obtained a list of 154 compositional constraints, e.g.", "(currency of country/country of film, currency of film budget).", "For each compositional constraint (r 1 /r 2 , r 3 ) in the list, we take the matrices M 1 , M 2 and M 3 corresponding to r 1 , r 2 and r 3 respectively, and rank M 3 according to its cosine similarity with M 1 M 2 , among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as a randomized baseline where M 2 is selected randomly from the relation matrices in JOINT+COMP instead (RANDOMM2).", "The results are shown in Table 3 .", "We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests a hypothesis that joint training might be just clustering M 3 and M 1 here, to the extent that M 3 and M 1 are so close that even a random M 2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.", "Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings as we discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices to orthogonal.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations, by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.", "In this work, path 
lengths are sampled from a Poisson distribution, we thus vary the mean λ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5 .", "We can see that, as λ gets larger, MR improves much but MRR slightly drops.", "It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves base models even more as the paths get longer, especially in MR.", "It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discovering compositional constraints and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-ofvocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learn an OOV entity vector (Dettmers et al., 2018 ), our approach is described below.", "For an incomplete triple h, r, ?", "in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation; except for the WN18RR test data which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting that all triples with OOV entities are removed from the test.", "The results are shown in Table 6" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-8
Learning Rates for Joint Training Autoencoder
η_2 (for the reconstruction objective, which tries to fit relation matrices to the AE) should be much smaller than η_1, because the AE is initialized randomly, so at the early stage it does not make much sense to fit matrices to the AE. As the training proceeds, the updates from the KB and AE objectives should be kept at the same scale, by choosing λ_1 and λ_2 accordingly.
η_2 (for the reconstruction objective, which tries to fit relation matrices to the AE) should be much smaller than η_1, because the AE is initialized randomly, so at the early stage it does not make much sense to fit matrices to the AE. As the training proceeds, the updates from the KB and AE objectives should be kept at the same scale, by choosing λ_1 and λ_2 accordingly.
[]
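As a quick numeric check of the balance argued on this slide: with the paper's hyper-parameters, α_2 starts 256 times smaller than α_1 (since η_1/η_2 = 2^8), and because λ_1 = λ_2 both rates approach 1/(λτ) for large τ.

```python
# Evaluate the two schedules alpha_i(tau) = eta_i / (1 + eta_i * lam * tau)
# at a few counter values; the asymptote 1/(lam * tau) is printed alongside.
eta1, eta2, lam = 1 / 64, 2 ** -14, 2 ** -14

alpha1 = lambda tau: eta1 / (1 + eta1 * lam * tau)
alpha2 = lambda tau: eta2 / (1 + eta2 * lam * tau)

for tau in (0, 10 ** 6, 10 ** 8):
    print(tau, alpha1(tau), alpha2(tau), 1 / (lam * tau) if tau else None)
```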
GEM-SciDuet-train-126#paper-1344#slide-9
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices: for one reason, the composition of two relations M_1, M_2 may match a third M_3 (e.g. the composition of the relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M_1 · M_2 ≈ M_3). In this paper we investigate a dimension reduction technique that trains relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discover compositional constraints, and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M_3) also justifies dimension reduction, because it implies a compositional constraint M_1 · M_2 ≈ M_3 that can be satisfied only by a lower dimension sub-manifold in the parameter space.", "(It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.)", "Previous approaches reduce the dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices, or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017).", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g. the composition of currency of country and headquarter location usually matches business operation currency, but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds a posteriori from data, and it does not impose any explicit hard constraints.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between the gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss the detailed settings that lead to this performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discover compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model: A knowledge base (KB) is a set T of triples of the form ⟨h, r, t⟩, where h, t ∈ E are entities and r ∈ R is a relation (e.g. ⟨The Matrix, country of film, Australia⟩).", "A relation r has its inverse r^{-1} ∈ R, so that for every ⟨h, r, t⟩ ∈ T, we regard ⟨t, r^{-1}, h⟩ as also in the KB.", "Under this assumption, and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete ⟨h, r, ?⟩ triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u_h, v_t respectively, and relation r as a d×d matrix M_r.", "If u_h, v_t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M_r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into ⟨h, r, ?⟩ is calculated by u_h^⊤ M_r (with each nonzero entry corresponding to an answer).", "Thus, we
have u_h^⊤ M_r v_t > 0 if and only if ⟨h, r, t⟩ ∈ T.", "This motivates us to use u_h^⊤ M_r v_t as a natural parameter to model the plausibility of ⟨h, r, t⟩, even in a low dimension space with d ≪ |E|.", "Thus, we define the score function as s(h, r, t) := exp(u_h^⊤ M_r v_t)   (1) for the basic model.", "This is similar to the bilinear model of Nickel et al. (2011), except that we distinguish u_h (the vector for head entities) from v_t (the vector for tail entities).", "It has also been proposed in Tian et al. (2016), but for modeling dependency trees rather than KBs.", "More generally, we consider compositions of relations r_1/.../r_l to model paths in a KB (Guu et al., 2015), as defined by r_1, ..., r_l participating in a sequence of facts such that the head entity of each fact coincides with the tail of the previous one.", "For example, a sequence of two facts ⟨The Matrix, country of film, Australia⟩ and ⟨Australia, currency of country, Australian Dollar⟩ forms a path of composition country of film / currency of country, because the head of the second fact (i.e. Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r_1/.../r_l, t) := exp(u_h^⊤ M_{r_1} · · · M_{r_l} v_t) to measure the plausibility of a path.", "It is explored in Guu et al. (2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn the parameters u_h, v_t, M_r of the score function, we follow Tian et al. (2016) in using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) ⟨h, r_1/.../r_l, t⟩ taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t*.", "Then, we maximize L_1 := Σ_{path} ln[ s(h, r_1/.../r_l, t) / (k + s(h, r_1/.../r_l, t)) ] + Σ_{noise} ln[ k / (k + s(h, r_1/.../r_l, t*)) ] as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as a probability, L_1 represents the log-likelihood of \"⟨h, r_1/.../r_l, t⟩ being an actual path and ⟨h, r_1/.../r_l, t*⟩ being noise\".", "Maximizing L_1 increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder: Autoencoders learn efficient codings of high-dimensional data while trying to reconstruct the original data from the coding.", "By jointly training relation matrices with an autoencoder, we also expect it to help reduce the dimensionality of the original data (i.e. the relation matrices).", "Formally, we define a vectorization m_r for each relation matrix M_r, and use it as input to the autoencoder.", "m_r is defined as a reshape of M_r flattened into a d^2-dimension vector, and normalized such that ‖m_r‖ = √d.",
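A toy sketch of Eq. (1), the path score, and the NCE objective L_1 for one path with k noises; the parameter scales follow the paper's initialization, but the code itself is illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ent = 8, 100
U = rng.normal(scale=1 / np.sqrt(d), size=(n_ent, d))   # head vectors u_h
V = rng.normal(scale=1 / np.sqrt(d), size=(n_ent, d))   # tail vectors v_t

def score(h, mats, t):
    """s(h, r1/.../rl, t) = exp(u_h^T M_r1 ... M_rl v_t)."""
    x = U[h]
    for M in mats:
        x = x @ M
    return np.exp(x @ V[t])

def nce_loss(h, mats, t, noise_ts):
    """Negative of the L_1 term for one path; noises replace the tail."""
    k = len(noise_ts)
    s_pos = score(h, mats, t)
    loss = -np.log(s_pos / (k + s_pos))
    for t_star in noise_ts:
        s_neg = score(h, mats, t_star)
        loss -= np.log(k / (k + s_neg))
    return loss

M = [rng.normal(scale=1 / np.sqrt(d), size=(d, d))]     # one relation matrix
print(nce_loss(0, M, 1, noise_ts=[2, 3, 4, 5]))
```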
"We define c_r := ReLU(A m_r)   (2) as the coding.", "Here A is a c × d^2 matrix with c ≪ d^2, and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010).", "We reconstruct the input from c_r by multiplying a d^2 × c matrix B.", "We want B c_r to be more similar to m_r than to the other relations.", "For this purpose, we define a similarity g(r_1, r_2) := exp( (1/√(dc)) m_{r_1}^⊤ B c_{r_2} )   (3), which measures the length of B c_{r_2} projected onto the direction of m_{r_1}.", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2: we generate random noises r* for each relation r and maximize L_2 := Σ_{r∈R} ln[ g(r, r) / (k + g(r, r)) ] + Σ_{r*∼R} ln[ k / (k + g(r, r*)) ] as our reconstruction objective.", "Maximizing L_2 increases m_r's similarity with B c_r, and decreases it with B c_{r*}.", "During joint training, both L_1 and L_2 are simultaneously maximized, and the gradient ∇L_2 propagates to the relation matrices as well.", "Since ∇L_2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks: Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L_1 and ∇L_2, but if they follow ∇L_1 too much, the autoencoder has no effect; conversely, if they follow ∇L_2 too often, all relation matrices collapse into one cluster.", "Furthermore, an autoencoder should learn from the genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse, in which the autoencoder imposes arbitrary patterns on relation matrices according to a random initialization.", "Therefore, it is not surprising that a naive optimization of L_1 + L_2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1/√(dc) in the definition of the similarity function (3), perhaps combined with the other settings we discuss below.", "We have tried the factors 1, 1/√d, 1/√c and 1/(dc) instead, with various combinations of d and c, but the autoencoder failed to learn meaningful codings in those settings.", "When the scaling factor is too small (e.g. 1/(dc)), all relations get almost the same coding; conversely, if the factor is too large (e.g. 1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L_1 and ∇L_2.", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ) := η / (1 + ηλτ)   (4).", "Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail and keep a balance, we modify (4) to use a step counter τ_r for each relation r, counting the \"number of updates\" instead of data points.", "That is, whenever M_r gets a nonzero update from a gradient calculation, τ_r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η_1, λ_1 for updates coming from ∇L_1, and η_2, λ_2 for updates coming from ∇L_2.", "Thus, letting Δ_1 be the partial gradient of ∇L_1 and Δ_2 the partial gradient of ∇L_2, we update M_r by α_1(τ_r)Δ_1 + α_2(τ_r)Δ_2 at each step, where α_1(τ_r) := η_1 / (1 + η_1 λ_1 τ_r) and α_2(τ_r) := η_2 / (1 + η_2 λ_2 τ_r).",
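Eqs. (2) and (3), including the crucial 1/√(dc) scaling, fit in a few lines; the sizes here are toy values and A, B are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 8, 4
A = rng.normal(scale=1 / d, size=(c, d * d))   # encoder, c x d^2
B = rng.normal(scale=1 / d, size=(d * d, c))   # decoder, d^2 x c

def vectorize(M):
    m = M.reshape(-1)
    return m * (np.sqrt(d) / np.linalg.norm(m))      # ||m_r|| = sqrt(d)

def g(M1, M2):
    """Similarity of M1's vectorization with the reconstruction of M2."""
    m1, m2 = vectorize(M1), vectorize(M2)
    c2 = np.maximum(A @ m2, 0.0)                     # Eq. (2)
    return np.exp(m1 @ (B @ c2) / np.sqrt(d * c))    # Eq. (3), note 1/sqrt(dc)

M_r = rng.normal(size=(d, d))
print(g(M_r, M_r))   # reconstruction similarity of a relation with itself
```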
"Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L_1 and ∇L_2; but if the updates from ∇L_1 dominate, the autoencoder has no effect, and conversely, if the updates from ∇L_2 come too often, all relation matrices collapse into one cluster.", "Furthermore, an autoencoder should learn from the genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse, in which the autoencoder imposes arbitrary patterns on the relation matrices according to its random initialization.", "Therefore, it is not surprising that a naive optimization of L_1 + L_2 does not work.", "After extensive pre-experiments, we found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1/√(dc) in the definition of the similarity function (3), perhaps in combination with the other settings discussed below.", "We tried the factors 1, 1/√d, 1/√c and 1/(dc) instead, with various combinations of d and c, but the autoencoder failed to learn meaningful codings in those settings.", "When the scaling factor is too small (e.g. 1/(dc)), all relations get almost the same coding; conversely, if the factor is too large (e.g. 1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L_1 and ∇L_2.", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ) := η / (1 + ηλτ). (4)", "Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in enough detail to keep a balance, we modify (4) to use a step counter τ_r for each relation r, counting the number of updates instead of data points.", "That is, whenever M_r gets a nonzero update from a gradient calculation, τ_r increases by 1.", "Furthermore, we use different hyper-parameters for different types of updates, namely η_1, λ_1 for updates coming from ∇L_1, and η_2, λ_2 for updates coming from ∇L_2.", "Thus, letting Δ_1 be the partial gradient of ∇L_1 and Δ_2 the partial gradient of ∇L_2, we update M_r by α_1(τ_r)Δ_1 + α_2(τ_r)Δ_2 at each step, where α_1(τ_r) := η_1 / (1 + η_1 λ_1 τ_r) and α_2(τ_r) := η_2 / (1 + η_2 λ_2 τ_r).", "The rule for setting η_1, λ_1 and η_2, λ_2 is that η_2 should be much smaller than η_1, because η_1, η_2 control the magnitude of the learning rates at the early stage of training, when the autoencoder is still largely random and Δ_2 does not make much sense; on the other hand, one has to choose λ_1 and λ_2 such that Δ_1/λ_1 and Δ_2/λ_2 are at the same scale, because the learning rates approach 1/(λ_1 τ_r) and 1/(λ_2 τ_r) respectively as the training proceeds.", "In this way, the autoencoder will not impose random patterns on the relation matrices according to its initialization at the early stage, and a balance is kept between α_1(τ_r)Δ_1 and α_2(τ_r)Δ_2 later.", "But how can we estimate Δ_1 and Δ_2?", "It turns out that we can approximately calculate their scales from the initialization.", "In this work, we use i.i.d. Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are ‖u_h‖ ≈ 1, ‖v_t‖ ≈ 1, ‖M_r‖ ≈ √d, and ‖B A m_r‖ ≈ √(dc).", "Thus, by calculating ∇L_1 and ∇L_2 from (1) and (3), we have approximately ‖Δ_1‖ ≈ ‖u_h‖‖v_t‖ ≈ 1, and ‖Δ_2‖ ≈ (1/√(dc)) ‖B c_r‖ ≈ (1/√(dc)) ‖B A m_r‖ ≈ 1.", "This suggests that, thanks to the scaling factor 1/√(dc) in (3), Δ_1 and Δ_2 are at the same scale, so we can set λ_1 = λ_2.", "This might not be a mere coincidence.",
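The modified schedule can be sketched as a small optimizer; the class name and the in-place update are illustrative assumptions, and the defaults reuse the hyper-parameter values reported in the Experiments section.

```python
class TwoRateSGD:
    """Per-relation rates alpha_i(tau_r) = eta_i / (1 + eta_i * lambda_i * tau_r)."""
    def __init__(self, eta1=1/64, lam1=2**-14, eta2=2**-14, lam2=2**-14):
        self.eta1, self.lam1 = eta1, lam1   # for updates coming from grad(L_1)
        self.eta2, self.lam2 = eta2, lam2   # for updates coming from grad(L_2)
        self.tau = {}                       # tau_r counts updates, not data points

    def step(self, M_r, r, delta1, delta2):
        t = self.tau.get(r, 0)
        a1 = self.eta1 / (1 + self.eta1 * self.lam1 * t)
        a2 = self.eta2 / (1 + self.eta2 * self.lam2 * t)
        M_r += a1 * delta1 + a2 * delta2    # gradient ascent: L_1 and L_2 are maximized
        self.tau[r] = t + 1
        return M_r
```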
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
"Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017).", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al. (2013)'s vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1-to-1 relations but might be too simple to represent N-to-N relations accurately (Wang et al., 2017).", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) were proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.", "Our work inherits the same motivation as ITransF in terms of promoting parameter sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011), in which relations are naturally represented as analogues of adjacency matrices (Sec. 2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018), which improve this approach in terms of parameter efficiency by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al. (2016), and show that the base model can already achieve near state-of-the-art performance (Sec. 6.1, 6.3).", "This sends a message similar to Kadlec et al. (2017): training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018), whereas the constraints themselves are not learned from data; in contrast, our approach of jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a), our discussion of the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In the experiments, we show (Sec. 6.2, 6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used on their own for learning distributed representations of syntactic trees (Socher et al., 2011), words and images (Silberer and Lapata, 2014), or semantic roles (Titov and Khoddam, 2015).", "They have also been used for pretraining other deep neural networks (Erhan et al., 2010).", "However, when combined with other models, the learning of autoencoders, or more generally of sparse codings (Rubinstein et al., 2010), is usually conducted in an alternating manner, fixing one part of the model while optimizing the other, as in Xie et al. (2017).", "To our knowledge, joint training with an autoencoder has not been widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011), in that both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful for reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013), WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015).", "The statistical information of these datasets is shown in Table 1.", "WN18 collects word relations from WordNet (Miller, 1995), and FB15k is taken from Freebase (Bollacker et al., 2008); both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks, because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from the training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimensions d = 256 and c = 16, and the SGD hyper-parameters η_1 = 1/64, η_2 = 2^-14 and λ_1 = λ_2 = 2^-14.", "The training batch size is 32, and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walks to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple ⟨h, r, ?⟩ in the KBC test, we calculate a score s(h, r, e) from (1) for every entity e ∈ E such that ⟨h, r, e⟩ does not appear in any of the training, validation, or test sets (Bordes et al., 2013).", "Then, the calculated scores, together with s(h, r, t) for the gold triple, are converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on the validation sets to determine the number of training epochs; we stop training when both MR and MRR have stopped improving.",
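The filtered ranking protocol and the three metrics can be sketched as below; `score_fn` and the set of known triples are illustrative assumptions.

```python
import numpy as np

def filtered_rank(score_fn, h, r, gold_t, entities, known_triples):
    """Rank of the gold tail among candidates e with (h, r, e) unseen in train/valid/test."""
    gold_score = score_fn(h, r, gold_t)
    rank = 1
    for e in entities:
        if e != gold_t and (h, r, e) not in known_triples:
            rank += score_fn(h, r, e) > gold_score
    return rank

def summarize(ranks):
    """Mean Rank, Mean Reciprocal Rank and Hits@10 from a list of gold ranks."""
    ranks = np.asarray(ranks, dtype=float)
    return {"MR": ranks.mean(), "MRR": (1.0 / ranks).mean(), "H10": (ranks <= 10).mean()}
```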
"KBC Results The results are shown in Table 2.", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes clearer when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because, generally, joint training contributes through its regularizing effects, so drastic improvements are less expected.", "(The source code and trained models are publicly released at https://github.com/tianran/glimvec.)", "When compositional training is enabled, the system usually achieves better MR, though it does not always improve in the other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018).", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments with several representative algorithms, and also compare with state-of-the-art published results.", "For the re-experiments, we use Lin et al. (2015b)'s implementation of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations, and Nickel et al. (2016b)'s implementation of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We experimented with the default settings, and found that our models outperform most of them.", "[Table 2: KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets. The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previously published results. Bold numbers are the best in each sector, and (*) indicates the best of all.]", "Among the published results, STransE (Nguyen et al., 2016), ITransF (Xie et al., 2017), ComplEx (Trouillon et al., 2016) and ConvE were previously the best.", "Our models mostly outperform them.", "Other results include Kadlec et al. (2017)'s simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve the best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions with analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns a sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that, to some extent, explain the semantics of relations.", "Figure 2 shows some examples.", "[Figure 2: example sparse codings over the 16 coding dimensions, for relations such as profession, nationality, film_language^-1, currency_of_country, currency_of_company, currency_of_film_budget, release_region_of_film, producer_of_film and writer_of_film.]", "In the first group of Figure 2, we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g. film and language), which probably constitute the skeleton of a KB.", "In the second group, we found that the 12th dimension strongly correlates with currency; and in the third group, we found that the 4th dimension strongly correlates with film.", "As for the relation currency_of_film_budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of the relation matrices and never constrains them to be exactly equal to the originals, relation matrices with very similar codings may still differ considerably.", "For example, producer_of_film and writer_of_film have codings with cosine similarity 0.973, but their relation matrices only have a cosine similarity of 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP (McInnes and Healy, 2018) to embed M_r into a 2D plane.", "(We also tried t-SNE (van der Maaten and Hinton, 2008), but found UMAP more insightful.)", "We use relation matrices trained on FB15k-237, and compare models trained for the same number of epochs.", "The results are shown in Figure 3.",
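A minimal sketch of this visualization step, assuming the umap-learn package; the UMAP parameters shown are library defaults, not necessarily those used for Figure 3.

```python
import numpy as np
import umap  # the umap-learn package

def embed_relations(relation_matrices):
    """Embed flattened relation matrices M_r into 2D for plotting (cf. Figure 3)."""
    X = np.stack([M.reshape(-1) for M in relation_matrices])
    return umap.UMAP(n_components=2).fit_transform(X)   # shape: (n_relations, 2)
```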
"We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with frequent ones, and multiple traces of low dimension structures.", "This suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures from Figure 3b, which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r_1/r_2, r_3) pairs such that r_1/r_2 matches r_3.", "Formally, the list is constructed as follows.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that ⟨h, r, t⟩ is a fact in the KB.", "Similarly, we define C(r_1/r_2) as the set of (h, t) pairs such that ⟨h, r_1/r_2, t⟩ is a path.", "We regard (r_1/r_2, r_3) as a compositional constraint if their content sets are similar; that is, if |C(r_1/r_2) ∩ C(r_3)| ≥ 50 and the Jaccard similarity between C(r_1/r_2) and C(r_3) is ≥ 0.4.", "Then, after filtering out degenerate cases such as r_1 = r_3 or r_2 = r_1^-1, we obtained a list of 154 compositional constraints, e.g. (currency_of_country/country_of_film, currency_of_film_budget).", "For each compositional constraint (r_1/r_2, r_3) in the list, we take the matrices M_1, M_2 and M_3 corresponding to r_1, r_2 and r_3 respectively, and rank M_3 according to its cosine similarity with M_1 M_2, among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as to a randomized baseline in which M_2 is instead selected randomly from the relation matrices of JOINT+COMP (RANDOMM2).", "The results are shown in Table 3.", "We evaluated 5 different random initializations for each model, trained for the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests the hypothesis that joint training might merely be clustering M_3 and M_1, to the extent that M_3 and M_1 are so close that even a random M_2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it indeed learns compositions.",
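Both the constraint mining and the cosine-similarity ranking are easy to sketch; the dictionary-based layout is an illustrative assumption, and the r_2 = r_1^-1 filter is only hinted at, since it needs an inverse-relation map.

```python
import numpy as np
from itertools import product

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def mine_constraints(content, path_content, min_overlap=50, min_jaccard=0.4):
    """Collect (r1/r2, r3) pairs whose content sets overlap enough; content maps r to a set
    of (h, t) pairs, and path_content maps (r1, r2) to a set of (h, t) pairs."""
    out = []
    for (r1, r2), r3 in product(path_content, content):
        if r3 in (r1, r2):              # drop degenerate cases (plus r2 == r1^-1 in the paper)
            continue
        c12, c3 = path_content[(r1, r2)], content[r3]
        if len(c12 & c3) >= min_overlap and jaccard(c12, c3) >= min_jaccard:
            out.append(((r1, r2), r3))
    return out

def composition_rank(M, r1, r2, r3):
    """Rank of M_{r3} by cosine similarity with M_{r1} M_{r2} among all relation matrices."""
    target = (M[r1] @ M[r2]).reshape(-1)
    def cos(r):
        v = M[r].reshape(-1)
        return (target @ v) / (np.linalg.norm(target) * np.linalg.norm(v))
    s3 = cos(r3)
    return 1 + sum(cos(r) > s3 for r in M if r != r3)
```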
"Losses and Gains In the KBC task, where are the losses and what are the gains of the different settings?", "With additional evaluations, we show that (i) some settings are crucial for the base model, and (ii) joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings, as discussed in Sec. 4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices toward orthogonality.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.", "In this work, path lengths are sampled from a Poisson distribution, so we vary the mean λ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5.", "We can see that, as λ gets larger, MR improves much while MRR slightly drops.", "This suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves the base models even more as the paths get longer, especially in MR.", "This further supports our conjecture that joint training with an autoencoder may interact strongly with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks, with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; that the joint training technique drives high-dimensional data toward low dimension manifolds; and that the reduction of dimensionality may interact strongly with composition, helping to discover compositional constraints and benefiting from compositional training.", "We believe these findings provide insightful understanding of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below.", "For an incomplete triple ⟨h, r, ?⟩ in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing its score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data, which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting where all triples with OOV entities are removed from the test.", "The results are shown in Table 6." ] }
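A sketch of this OOV back-off; the names are illustrative, and the zero-vector treatment of OOV gold tails would live in the scoring code rather than here.

```python
from collections import Counter

def build_head_counts(train_triples):
    """Map each relation r to a Counter of its head entities in the training data."""
    counts = {}
    for h, r, t in train_triples:
        counts.setdefault(r, Counter())[h] += 1
    return counts

def resolve_oov_head(h, r, train_vocab, head_counts):
    """If h is out-of-vocabulary, back off to the most frequent training head of relation r."""
    if h in train_vocab:
        return h
    return head_counts[r].most_common(1)[0][0]
```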
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-9
Other Training Techniques
instead of pure Gaussian
instead of pure Gaussian
[]
GEM-SciDuet-train-126#paper-1344#slide-11
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices: for one reason, composition of two relations M_1, M_2 may match a third M_3 (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M_1 · M_2 ≈ M_3). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M 3 ) also justifies dimension reduction, because it implies a compositional constraint M 1 · M 2 ≈ M 3 that can be satisfied only by a lower dimension sub-manifold in the parameter space 1 .", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices , or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017) .", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g.", "the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1 ).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.", "1 It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form h, r, t , where h, t ∈ E are entities and r ∈ R is a relation (e.g.", "The Matrix, country of film, Australia ).", "A relation r has its inverse r −1 ∈ R so that for every h, r, t ∈ T , we regard t, r −1 , h as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete h, r, ?", "triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u h , v t respectively, and relation r as a d×d matrix M r .", "If u h , v t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into h, r, ?", "is calculated by u h M r (with each nonzero entry corresponds to an answer).", "Thus, we 
have u h M r v t > 0 if and only if h, r, t ∈ T .", "This motivates us to use u h M r v t as a natural parameter to model plausibility of h, r, t , even in a low dimension space with d |E|.", "Thus, we define the score function as s(h, r, t) := exp(u h M r v t ) (1) for the basic model.", "This is similar to the bilinear model of Nickel et al.", "(2011) , except that we distinguish u h (the vector for head entities) from v t (the vector for tail entities).", "It has also been proposed in Tian et al.", "(2016) , but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations r 1 / .", ".", ".", "/r l to model paths in a KB (Guu et al., 2015) , as defined by r 1 , .", ".", ".", ", r l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous.", "For example, a sequence of two facts The Matrix, country of film, Australia and Australia, currency of country, Australian Dollar form a path of composition country of film / currency of country, because the head of the second fact (i.e.", "Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r 1 / .", ".", ".", "/r l , t) := exp(u h M r 1 · · · M r l v t ) to measure the plausibility of a path.", "It is explored in Guu et al.", "(2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn parameters u h , v t , M r of the score function, we follow Tian et al.", "(2016) using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) h, r 1 / .", ".", ".", ", t taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t * .", "Then, we maximize L 1 := path ln s(h, r 1 / .", ".", ".", ", t) k + s(h, r 1 / .", ".", ".", ", t) + noise ln k k + s(h, r 1 / .", ".", ".", ", t * ) as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as probability, L 1 represents the log-likelihood of \" h, r 1 / .", ".", ".", ", t being actual path and h, r 1 / .", ".", ".", ", t * being noise\".", "Maximizing L 1 increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder Autoencoders learn efficient codings of highdimensional data while trying to reconstruct the original data from the coding.", "By joint training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e.", "relation matrices).", "Formally, we define a vectorization m r for each relation matrix M r , and use it as input to the autoencoder.", "m r is defined as a reshape of M r flattened into a d 2 -dimension vector, and normalized such that m r = √ d. 
We define c r := ReLU(Am r ) (2) as the coding.", "Here A is a c × d 2 matrix with c d 2 , and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010) .", "We reconstruct the input from c r by multiplying a d 2 × c matrix B.", "We want Bc r to be more similar to m r than other relations.", "For this purpose, we define a similarity g(r 1 , r 2 ) := exp( 1 √ dc m r 1 Bc r 2 ), (3) which measures the length of Bc r 2 projected to the direction of m r 1 .", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r * for each relation r and maximize L 2 := r∈R ln g(r, r) k + g(r, r) + r * ∼R ln k k + g(r, r * ) as our reconstruction objective.", "Maximizing L 2 increases m r 's similarity with Bc r , and decreases it with Bc r * .", "During joint training, both L 1 and L 2 are simultaneously maximized, and the gradient ∇L 2 propagates to relation matrices as well.", "Since ∇L 2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L 1 and ∇L 2 , but if they update ∇L 1 too much, the autoencoder has no effect; conversely, if they update ∇L 2 too often, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse -in which the autoencoder imposes arbitrary patterns to relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L 1 + L 2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1 √ dc in definition of the similarity function (3) , perhaps being combined with other settings as we discuss below.", "We have tried different factors 1, 1 √ d , 1 √ c and 1 dc instead, with various combinations of d and c; but the autoencoder failed to learn meaningful codings in other settings.", "When the scaling factor is too small (e.g.", "1 dc ), all relations get almost the same coding; conversely if the factor is too large (e.g.", "1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L 1 and ∇L 2 .", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ ) := η 1 + ηλτ .", "(4) Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a a step counter τ r for each relation r, counting \"number of updates\" instead of data points 2 .", "That is, whenever M r gets a nonzero update from a gradient calculation, τ r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η 1 , λ 1 for updates coming from ∇L 1 , and η 2 , λ 2 for updates coming from ∇L 2 .", "Thus, let ∆ 1 be the partial gradient of ∇L 1 , and ∆ 2 the partial gradient of ∇L 2 , we update M r by α 1 (τ r )∆ 1 + α 2 (τ r )∆ 2 at each step, where α 1 (τ r ) := η 1 1 + η 1 λ 1 τ r , α 2 (τ r ) := η 2 1 + η 2 λ 2 τ r .", "The rule for setting η 1 , λ 1 and η 2 , λ 2 is 
that, η 2 should be much smaller than η 1 , because η 1 , η 2 control the magnitude of learning rates at the early stage of training, with the autoencoder still largely random and ∆ 2 not making much sense; on the other hand, one has to choose λ 1 and λ 2 such that ∆ 1 /λ 1 and ∆ 2 /λ 2 are at the same scale, because the learning rates approach 1/(λ 1 τ r ) and 1/(λ 2 τ r ) respectively, as the training proceeds.", "In this way, the autoencoder will not impose random patterns to relation matrices according to its initialization at the early stage, and a balance is kept between α 1 (τ r )∆ 1 and α 2 (τ r )∆ 2 later.", "But how to estimate ∆ 1 and ∆ 2 ?", "It seems that we can approximately calculate their scales from initialization.", "In this work, we use i.i.d.", "Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are u h ≈ 1, v t ≈ 1, M r ≈ √ d, and BAm r ≈ √ dc.", "Thus, by calculating ∇L 1 and ∇L 2 using (1) and (3) , we have approximately ∆ 1 ≈ u h v t ≈ 1, and ∆ 2 ≈ 1 √ dc Bc r ≈ 1 √ dc BAm r ≈ 1.", "It suggests that, because of the scaling factor 1 √ dc in (3), we have ∆ 1 and ∆ 2 at the same scale, so we can set λ 1 = λ 2 .", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show performance gains by these settings using the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to M r = √ d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize M r M r − 1 d tr(M r M r )I during training.", "This regularizer drives M r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016) .", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somehow counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017) .", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al.", "(2013) 's vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1to-1 relations but might be too simple to represent N -to-N relations accurately (Wang et al., 2017) .", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) are proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.", 
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
WordNet (Miller, 1995) , and FB15k is taken from Freebase (Bollacker et al., 2008) ; both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimension d = 256 and c = 16, the SGD hyper-parameters η 1 = 1/64, η 2 = 2 −14 and λ 1 = λ 2 = 2 −14 .", "The training batch size is 32 and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walk to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple h, r, ?", "in KBC test, we calculate a score s(h, r, e) from (1), for every entity e ∈ E such that h, r, e does not appear in any of the training, validation, or test sets (Bordes et al., 2013) .", "Then, the calculated scores together with s(h, r, t) for the gold triple is converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.", "KBC Results The results are shown in Table 2 .", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes more clear when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because generally, joint training contributes with its regularizing effects, and drastic improvements are less expected 3 .", "When compositional training is enabled, 3 The source code and trained models are publicly released at https://github.com/tianran/glimvec, where profession profession −1 film_crew_role −1 film_release_region −1 film_language −1 nationality currency_of_country currency_of_company currency_of_university currency_of_film_budget 2 4 6 8 10 12 14 16 currency_of_film_budget release_region_of_film corporation_of_film producer_of_film writer_of_film the system usually achieves better MR, though not always improves in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018) .", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For re-experiments, we use Lin et al.", "(2015b) 's implementation 4 of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations; and Nickel et al.", "(2016b) 's implementation 5 of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We 
experimented with the default settings, and found that our models outperform most of them.", "Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017) Table 2 : KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets.", "The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results.", "Bold numbers are the best in each sector, and ( * ) indicates the best of all.", "(Trouillon et al., 2016) and ConvE were previously the best results.", "Our models mostly outperform them.", "Other results include Kadlec et al.", "(2017) 's simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2 , we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g.", "film and language), which probably constitute the skeleton of a KB.", "In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrain them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have 6 a cosine similarity 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP 7 (McInnes and Healy, 2018) to embed M r into a 2D plane 8 .", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3 .", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with 
frequent ones, and multiple traces of low dimension structures.", "It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures against Figure 3b , which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r 1 /r 2 , r 3 ) pairs such that r 1 /r 2 matches r 3 .", "Formally, the list is constructed as below.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that h, r, t is a fact in the KB.", "Similarly, we define C(r 1 /r 2 ) t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful.", "as the set of (h, t) pairs such that h, r 1 /r 2 , t is a path.", "We regard (r 1 /r 2 , r 3 ) as a compositional constraint if their content sets are similar; that is, if |C(r 1 /r 2 ) ∩ C(r 3 )| ≥ 50 and the Jaccard similarity between C(r 1 /r 2 ) and C(r 3 ) is ≥ 0.4.", "Then, after filtering out degenerated cases such as r 1 = r 3 or r 2 = r −1 1 , we obtained a list of 154 compositional constraints, e.g.", "(currency of country/country of film, currency of film budget).", "For each compositional constraint (r 1 /r 2 , r 3 ) in the list, we take the matrices M 1 , M 2 and M 3 corresponding to r 1 , r 2 and r 3 respectively, and rank M 3 according to its cosine similarity with M 1 M 2 , among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as a randomized baseline where M 2 is selected randomly from the relation matrices in JOINT+COMP instead (RANDOMM2).", "The results are shown in Table 3 .", "We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests a hypothesis that joint training might be just clustering M 3 and M 1 here, to the extent that M 3 and M 1 are so close that even a random M 2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.", "Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings as we discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices to orthogonal.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations, by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.", "In this work, path 
lengths are sampled from a Poisson distribution, we thus vary the mean λ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5 .", "We can see that, as λ gets larger, MR improves much but MRR slightly drops.", "It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves base models even more as the paths get longer, especially in MR.", "It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discovering compositional constraints and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-ofvocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learn an OOV entity vector (Dettmers et al., 2018 ), our approach is described below.", "For an incomplete triple h, r, ?", "in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation; except for the WN18RR test data which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting that all triples with OOV entities are removed from the test.", "The results are shown in Table 6" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-11
Datasets for Knowledge Base Completion
Dataset #Entity #Relation #Train #Valid #Test WN18RR: subset of WordNet [Miller 95] The previous WN18 and FB15k have an information leakage issue (refer to our paper for test results) Evaluate models by how high the model ranks the gold entity
Dataset #Entity #Relation #Train #Valid #Test WN18RR: subset of WordNet [Miller 95] The previous WN18 and FB15k have an information leakage issue (refer to our paper for test results) Evaluate models by how high the model ranks the gold entity
[]
GEM-SciDuet-train-126#paper-1344#slide-12
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices: for one reason, composition of two relations M_1, M_2 may match a third M_3 (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M_1 · M_2 ≈ M_3). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M_3) also justifies dimension reduction, because it implies a compositional constraint M_1 · M_2 ≈ M_3 that can be satisfied only by a lower dimension sub-manifold in the parameter space (footnote 1).", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices, or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017).", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g. the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds a posteriori from data, and it does not impose any explicit hard constraints.", "Footnote 1: It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form ⟨h, r, t⟩, where h, t ∈ E are entities and r ∈ R is a relation (e.g. ⟨The Matrix, country of film, Australia⟩).", "A relation r has its inverse r^{-1} ∈ R so that for every ⟨h, r, t⟩ ∈ T, we regard ⟨t, r^{-1}, h⟩ as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete ⟨h, r, ?⟩ triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u_h, v_t respectively, and relation r as a d×d matrix M_r.", "If u_h, v_t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M_r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into ⟨h, r, ?⟩ is calculated by u_h^T M_r (with each nonzero entry corresponding to an answer).", "Thus, we
have u_h^T M_r v_t > 0 if and only if ⟨h, r, t⟩ ∈ T.", "This motivates us to use u_h^T M_r v_t as a natural parameter to model plausibility of ⟨h, r, t⟩, even in a low dimension space with d ≪ |E|.", "Thus, we define the score function as s(h, r, t) := exp(u_h^T M_r v_t) (1) for the basic model.", "This is similar to the bilinear model of Nickel et al. (2011), except that we distinguish u_h (the vector for head entities) from v_t (the vector for tail entities).", "It has also been proposed in Tian et al. (2016), but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations r_1/⋯/r_l to model paths in a KB (Guu et al., 2015), as defined by r_1, …, r_l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous.", "For example, a sequence of two facts ⟨The Matrix, country of film, Australia⟩ and ⟨Australia, currency of country, Australian Dollar⟩ forms a path of composition country of film / currency of country, because the head of the second fact (i.e. Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r_1/⋯/r_l, t) := exp(u_h^T M_{r_1} ⋯ M_{r_l} v_t) to measure the plausibility of a path.", "It is explored in Guu et al. (2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn parameters u_h, v_t, M_r of the score function, we follow Tian et al. (2016) using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) ⟨h, r_1/⋯/r_l, t⟩ taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t*.", "Then, we maximize L_1 := Σ_path ln[ s(h, r_1/⋯/r_l, t) / (k + s(h, r_1/⋯/r_l, t)) ] + Σ_noise ln[ k / (k + s(h, r_1/⋯/r_l, t*)) ] as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as probability, L_1 represents the log-likelihood of \"⟨h, r_1/⋯/r_l, t⟩ being an actual path and ⟨h, r_1/⋯/r_l, t*⟩ being noise\".", "Maximizing L_1 increases the scores of actual paths and decreases the scores of noises.",
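As a concrete illustration of the base model and its NCE objective (eq. 1 and L_1 above), here is a minimal NumPy sketch; the shapes, toy initialization, and uniform noise sampling are assumptions for illustration, not the released glimvec implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, d, k = 1000, 256, 5                     # entities, dimension d, noises k per path
U = rng.normal(0, 1 / np.sqrt(d), (n_ent, d))  # head vectors u_h
V = rng.normal(0, 1 / np.sqrt(d), (n_ent, d))  # tail vectors v_t
M = {"r1": np.eye(d), "r2": np.eye(d)}         # relation matrices M_r (toy init)

def score(h, rels, t):
    """s(h, r_1/.../r_l, t) = exp(u_h^T M_{r_1} ... M_{r_l} v_t), eq. (1)."""
    x = U[h]
    for r in rels:
        x = x @ M[r]
    return np.exp(x @ V[t])

def nce_loss(h, rels, t):
    """One path's contribution to -L_1, with k uniform noise tails t*."""
    s_pos = score(h, rels, t)
    loss = -np.log(s_pos / (k + s_pos))
    for t_neg in rng.integers(0, n_ent, size=k):
        s_neg = score(h, rels, t_neg)
        loss -= np.log(k / (k + s_neg))
    return loss
```

In practice one would update U, V and the matrices in M by gradients of this loss; the sketch only spells out the scoring and the objective itself.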
"Joint Training with an Autoencoder Autoencoders learn efficient codings of high-dimensional data while trying to reconstruct the original data from the coding.", "By jointly training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e. relation matrices).", "Formally, we define a vectorization m_r for each relation matrix M_r, and use it as input to the autoencoder.", "m_r is defined as a reshape of M_r flattened into a d²-dimension vector, and normalized such that ‖m_r‖ = √d. We define c_r := ReLU(A m_r) (2) as the coding.", "Here A is a c × d² matrix with c ≪ d², and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010).", "We reconstruct the input from c_r by multiplying a d² × c matrix B.", "We want B c_r to be more similar to m_r than to other relations.", "For this purpose, we define a similarity g(r_1, r_2) := exp( (1/√(dc)) m_{r_1}^T B c_{r_2} ), (3) which measures the length of B c_{r_2} projected onto the direction of m_{r_1}.", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r* for each relation r and maximize L_2 := Σ_{r∈R} ln[ g(r, r) / (k + g(r, r)) ] + Σ_{r*∼R} ln[ k / (k + g(r, r*)) ] as our reconstruction objective.", "Maximizing L_2 increases m_r's similarity with B c_r, and decreases it with B c_{r*}.", "During joint training, both L_1 and L_2 are simultaneously maximized, and the gradient ∇L_2 propagates to relation matrices as well.", "Since ∇L_2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.",
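The following NumPy sketch mirrors the autoencoder just described (eqs. 2 and 3, and the reconstruction objective L_2); the shapes follow the text, while the initialization and the function names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, k = 256, 16, 5
A = rng.normal(0, 1 / np.sqrt(d), (c, d * d))  # encoder, c x d^2
B = rng.normal(0, 1 / np.sqrt(d), (d * d, c))  # decoder, d^2 x c

def vectorize(M_r):
    """m_r: flatten M_r and rescale to Euclidean norm sqrt(d)."""
    m = M_r.reshape(-1)
    return m * (np.sqrt(d) / np.linalg.norm(m))

def coding(m_r):
    """c_r = ReLU(A m_r), eq. (2)."""
    return np.maximum(A @ m_r, 0.0)

def g(m_r1, c_r2):
    """g(r_1, r_2) = exp(m_r1^T B c_r2 / sqrt(d*c)), eq. (3)."""
    return np.exp((m_r1 @ (B @ c_r2)) / np.sqrt(d * c))

def recon_loss(m_r, all_m):
    """One relation's contribution to -L_2, with k noise relations drawn from R."""
    c_r = coding(m_r)
    loss = -np.log(g(m_r, c_r) / (k + g(m_r, c_r)))
    for i in rng.integers(0, len(all_m), size=k):
        loss -= np.log(k / (k + g(m_r, coding(all_m[i]))))
    return loss
```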
"Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L_1 and ∇L_2, but if they follow ∇L_1 too much, the autoencoder has no effect; conversely, if they follow ∇L_2 too often, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse - in which the autoencoder imposes arbitrary patterns on relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L_1 + L_2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1/√(dc) in the definition of the similarity function (3), perhaps in combination with other settings as we discuss below.", "We have tried different factors 1, 1/√d, 1/√c and 1/(dc) instead, with various combinations of d and c; but the autoencoder failed to learn meaningful codings in those settings.", "When the scaling factor is too small (e.g. 1/(dc)), all relations get almost the same coding; conversely, if the factor is too large (e.g. 1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L_1 and ∇L_2.", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ) := η / (1 + ηλτ). (4)", "Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a step counter τ_r for each relation r, counting the \"number of updates\" instead of data points.", "That is, whenever M_r gets a nonzero update from a gradient calculation, τ_r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η_1, λ_1 for updates coming from ∇L_1, and η_2, λ_2 for updates coming from ∇L_2.", "Thus, letting ∆_1 be the partial gradient of L_1 and ∆_2 the partial gradient of L_2 with respect to M_r, we update M_r by α_1(τ_r)∆_1 + α_2(τ_r)∆_2 at each step, where α_1(τ_r) := η_1 / (1 + η_1 λ_1 τ_r) and α_2(τ_r) := η_2 / (1 + η_2 λ_2 τ_r).", "The rule for setting η_1, λ_1 and η_2, λ_2 is that η_2 should be much smaller than η_1, because η_1, η_2 control the magnitude of learning rates at the early stage of training, with the autoencoder still largely random and ∆_2 not making much sense; on the other hand, one has to choose λ_1 and λ_2 such that ∆_1/λ_1 and ∆_2/λ_2 are at the same scale, because the learning rates approach 1/(λ_1 τ_r) and 1/(λ_2 τ_r) respectively, as the training proceeds.", "In this way, the autoencoder will not impose random patterns on relation matrices according to its initialization at the early stage, and a balance is kept between α_1(τ_r)∆_1 and α_2(τ_r)∆_2 later.", "But how can we estimate ∆_1 and ∆_2?", "It seems that we can approximately calculate their scales from initialization.", "In this work, we use i.i.d. Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are ‖u_h‖ ≈ 1, ‖v_t‖ ≈ 1, ‖M_r‖ ≈ √d, and ‖B A m_r‖ ≈ √(dc).", "Thus, by calculating ∇L_1 and ∇L_2 using (1) and (3), we have approximately ‖∆_1‖ ≈ ‖u_h‖·‖v_t‖ ≈ 1, and ‖∆_2‖ ≈ (1/√(dc))·‖B c_r‖ ≈ (1/√(dc))·‖B A m_r‖ ≈ 1.", "It suggests that, because of the scaling factor 1/√(dc) in (3), we have ∆_1 and ∆_2 at the same scale, so we can set λ_1 = λ_2.", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show performance gains from these settings using the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to ‖M_r‖ = √d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize ‖M_r M_r^T − (1/d)·tr(M_r M_r^T)·I‖ during training.", "This regularizer drives M_r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of a pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016).", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somewhat counterintuitive compared to training word embeddings.",
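To ground the per-relation schedule and the orthogonality regularizer in code, here is a small sketch; the hyper-parameter values are those reported later in the experiments, while the update routine itself is our simplified assumption (the gradients ∆_1 and ∆_2 are taken as given).

```python
import numpy as np

eta1, lam1 = 1 / 64, 2 ** -14    # for updates from the KB-learning gradient
eta2, lam2 = 2 ** -14, 2 ** -14  # for updates from the reconstruction gradient

def alpha(eta, lam, tau_r):
    """Per-relation learning rate: eta / (1 + eta * lam * tau_r)."""
    return eta / (1.0 + eta * lam * tau_r)

def update_relation(M_r, delta1, delta2, tau_r):
    """Ascent step M_r += alpha_1 * Delta_1 + alpha_2 * Delta_2; bump the counter."""
    M_r = M_r + alpha(eta1, lam1, tau_r) * delta1 + alpha(eta2, lam2, tau_r) * delta2
    return M_r, tau_r + 1

def ortho_penalty(M_r):
    """|| M_r M_r^T - (1/d) tr(M_r M_r^T) I ||, pushing M_r toward orthogonality."""
    d = M_r.shape[0]
    G = M_r @ M_r.T
    return np.linalg.norm(G - (np.trace(G) / d) * np.eye(d))
```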
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
"Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013), WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015).", "The statistical information of these datasets is shown in Table 1.", "WN18 collects word relations from WordNet (Miller, 1995), and FB15k is taken from Freebase (Bollacker et al., 2008); both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimension d = 256 and c = 16, and the SGD hyper-parameters η_1 = 1/64, η_2 = 2^{-14} and λ_1 = λ_2 = 2^{-14}.", "The training batch size is 32 and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walks to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple ⟨h, r, ?⟩ in the KBC test, we calculate a score s(h, r, e) from (1), for every entity e ∈ E such that ⟨h, r, e⟩ does not appear in any of the training, validation, or test sets (Bordes et al., 2013).", "Then, the calculated scores together with s(h, r, t) for the gold triple are converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.",
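A minimal sketch of this filtered ranking protocol and the three metrics follows; `score` is the path scoring function sketched earlier, and `known` is an assumed membership test over the union of the training, validation, and test triples.

```python
import numpy as np

def evaluate(test_triples, n_ent, score, known):
    """Filtered ranking: candidates e with <h, r, e> known anywhere are skipped."""
    ranks = []
    for h, r, t in test_triples:
        s_gold = score(h, [r], t)
        rank = 1
        for e in range(n_ent):
            if e != t and not known(h, r, e) and score(h, [r], e) > s_gold:
                rank += 1
        ranks.append(rank)
    ranks = np.array(ranks, dtype=float)
    return {"MR": ranks.mean(),           # lower is better
            "MRR": (1.0 / ranks).mean(),  # higher is better
            "H10": (ranks <= 10).mean()}  # higher is better
```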
"KBC Results The results are shown in Table 2.", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes clearer when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because generally, joint training contributes through its regularizing effects, and drastic improvements are less expected.", "(Footnote 3: The source code and trained models are publicly released at https://github.com/tianran/glimvec.)", "[Figure 2: examples of sparse relation codings, with code dimensions 2-16 shown for three groups of relations: profession, profession^{-1}, film_crew_role^{-1}, film_release_region^{-1}, film_language^{-1}, nationality; currency_of_country, currency_of_company, currency_of_university, currency_of_film_budget; and currency_of_film_budget, release_region_of_film, corporation_of_film, producer_of_film, writer_of_film.]", "When compositional training is enabled, the system usually achieves better MR, though it does not always improve in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018).", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For re-experiments, we use Lin et al. (2015b)'s implementation of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations; and Nickel et al. (2016b)'s implementation of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We experimented with the default settings, and found that our models outperform most of them.", "[Table 2: KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets. The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results. Bold numbers are the best in each sector, and (*) indicates the best of all.]", "Among the published results, STransE (Nguyen et al., 2016), ITransF (Xie et al., 2017), ComplEx (Trouillon et al., 2016) and ConvE were previously the best results.", "Our models mostly outperform them.", "Other results include Kadlec et al. (2017)'s simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns a sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2, we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g. film and language), which probably constitute the skeleton of a KB.", "In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrains them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have a cosine similarity of 0.338.",
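The sparsity and similarity observations above can be reproduced with inspections along the following lines; this is our own sketch (reusing the `coding` helper assumed earlier), not the authors' analysis script.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened arrays."""
    a, b = a.reshape(-1), b.reshape(-1)
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def active_dims(c_r, frac=0.5):
    """Code dimensions carrying most of the mass (assumes a nonzero coding);
    per the text, this is usually only two or three dimensions."""
    return np.nonzero(c_r > frac * c_r.max())[0]

# e.g. cosine(coding(m1), coding(m2)) vs. cosine(m1, m2) can differ a lot,
# as in the 0.973 vs. 0.338 example above.
```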
"Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP (McInnes and Healy, 2018) to embed M_r into a 2D plane.", "(We also tried t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful.)", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3.", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with frequent ones, and multiple traces of low dimension structures.", "It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures against Figure 3b, which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r_1/r_2, r_3) pairs such that r_1/r_2 matches r_3.", "Formally, the list is constructed as below.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that ⟨h, r, t⟩ is a fact in the KB.", "Similarly, we define C(r_1/r_2) as the set of (h, t) pairs such that ⟨h, r_1/r_2, t⟩ is a path.", "We regard (r_1/r_2, r_3) as a compositional constraint if their content sets are similar; that is, if |C(r_1/r_2) ∩ C(r_3)| ≥ 50 and the Jaccard similarity between C(r_1/r_2) and C(r_3) is ≥ 0.4.", "Then, after filtering out degenerate cases such as r_1 = r_3 or r_2 = r_1^{-1}, we obtained a list of 154 compositional constraints, e.g. (currency of country/country of film, currency of film budget).", "For each compositional constraint (r_1/r_2, r_3) in the list, we take the matrices M_1, M_2 and M_3 corresponding to r_1, r_2 and r_3 respectively, and rank M_3 according to its cosine similarity with M_1 M_2, among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as a randomized baseline where M_2 is selected randomly from the relation matrices in JOINT+COMP instead (RANDOMM2).", "The results are shown in Table 3.", "We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests the hypothesis that joint training might just be clustering M_3 and M_1 here, to the extent that M_3 and M_1 are so close that even a random M_2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.",
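A compact sketch of this constraint-extraction and ranking procedure is given below; the set-based representations and helper names are our assumptions, with the thresholds (50 and 0.4) taken from the text.

```python
import numpy as np

def compose_content(C1, C2):
    """C(r1/r2): (h, t) pairs such that (h, x) in C(r1) and (x, t) in C(r2)."""
    by_head = {}
    for x, t in C2:
        by_head.setdefault(x, set()).add(t)
    return {(h, t) for h, x in C1 for t in by_head.get(x, ())}

def is_constraint(C12, C3):
    """Content sets are 'similar': |intersection| >= 50 and Jaccard >= 0.4."""
    inter = len(C12 & C3)
    union = len(C12 | C3)
    return inter >= 50 and union > 0 and inter / union >= 0.4

def rank_of_composition(M1, M2, M3, all_matrices):
    """Rank M_3 by cosine similarity with M_1 M_2 among all relation matrices."""
    target = (M1 @ M2).reshape(-1)
    def cos(M):
        m = M.reshape(-1)
        return float(target @ m) / (np.linalg.norm(target) * np.linalg.norm(m))
    s3 = cos(M3)
    return 1 + sum(cos(M) > s3 for M in all_matrices)
```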
"Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) that joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings as discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices toward orthogonality.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations, by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.", "In this work, path lengths are sampled from a Poisson distribution; we thus vary the mean λ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5 .", "We can see that, as λ gets larger, MR improves substantially but MRR slightly drops.", "It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves base models even more as the paths get longer, especially in MR.", "It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discovering compositional constraints and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below.", "For an incomplete triple ⟨h, r, ?⟩ in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data, which contains about 6.7% of triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting where all triples with OOV entities are removed from the test.", "The results are shown in Table 6" ] }
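The path sampler used for compositional training (length 1 + X with X ~ Poisson(λ)) can be sketched as follows; the adjacency layout `out_edges` is an assumed data structure, not taken from the released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_path(out_edges, h, lam=1.0):
    """Random walk from head h; out_edges[e] lists (relation, tail) pairs,
    with inverse relations included so walks can traverse edges both ways."""
    length = 1 + rng.poisson(lam)
    rels, node = [], h
    for _ in range(length):
        edges = out_edges[node]
        if not edges:
            break
        r, node = edges[rng.integers(len(edges))]
        rels.append(r)
    return rels, node  # train s(h, r_1/.../r_l, node) on this path
```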
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-12
Base Model vs Joint Training with Autoencoder
MR MRR H10 MR MRR H10 BASE: The bilinear model Jointly train relation matrices MRR (Mean Reciprocal Rank): Joint training with an autoencoder improves upon the base bilinear model
MR MRR H10 MR MRR H10 BASE: The bilinear model Jointly train relation matrices MRR (Mean Reciprocal Rank): Joint training with an autoencoder improves upon the base bilinear model
[]
GEM-SciDuet-train-126#paper-1344#slide-13
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices; for one reason, composition of two relations M_1, M_2 may match a third M_3 (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M_1·M_2 ≈ M_3). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M_3) also justifies dimension reduction, because it implies a compositional constraint M_1 · M_2 ≈ M_3 that can be satisfied only by a lower dimension sub-manifold in the parameter space (footnote 1).", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices, or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017).", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g. the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds a posteriori from data, and it does not impose any explicit hard constraints.", "Footnote 1: It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form ⟨h, r, t⟩, where h, t ∈ E are entities and r ∈ R is a relation (e.g. ⟨The Matrix, country of film, Australia⟩).", "A relation r has its inverse r^{-1} ∈ R so that for every ⟨h, r, t⟩ ∈ T, we regard ⟨t, r^{-1}, h⟩ as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete ⟨h, r, ?⟩ triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u_h, v_t respectively, and relation r as a d×d matrix M_r.", "If u_h, v_t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M_r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into ⟨h, r, ?⟩ is calculated by u_h^T M_r (with each nonzero entry corresponding to an answer).", "Thus, we
have u_h^T M_r v_t > 0 if and only if ⟨h, r, t⟩ ∈ T.", "This motivates us to use u_h^T M_r v_t as a natural parameter to model plausibility of ⟨h, r, t⟩, even in a low dimension space with d ≪ |E|.", "Thus, we define the score function as s(h, r, t) := exp(u_h^T M_r v_t) (1) for the basic model.", "This is similar to the bilinear model of Nickel et al. (2011), except that we distinguish u_h (the vector for head entities) from v_t (the vector for tail entities).", "It has also been proposed in Tian et al. (2016), but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations r_1/⋯/r_l to model paths in a KB (Guu et al., 2015), as defined by r_1, …, r_l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous.", "For example, a sequence of two facts ⟨The Matrix, country of film, Australia⟩ and ⟨Australia, currency of country, Australian Dollar⟩ forms a path of composition country of film / currency of country, because the head of the second fact (i.e. Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r_1/⋯/r_l, t) := exp(u_h^T M_{r_1} ⋯ M_{r_l} v_t) to measure the plausibility of a path.", "It is explored in Guu et al. (2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn parameters u_h, v_t, M_r of the score function, we follow Tian et al. (2016) using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) ⟨h, r_1/⋯/r_l, t⟩ taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t*.", "Then, we maximize L_1 := Σ_path ln[ s(h, r_1/⋯/r_l, t) / (k + s(h, r_1/⋯/r_l, t)) ] + Σ_noise ln[ k / (k + s(h, r_1/⋯/r_l, t*)) ] as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as probability, L_1 represents the log-likelihood of \"⟨h, r_1/⋯/r_l, t⟩ being an actual path and ⟨h, r_1/⋯/r_l, t*⟩ being noise\".", "Maximizing L_1 increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder Autoencoders learn efficient codings of high-dimensional data while trying to reconstruct the original data from the coding.", "By jointly training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e. relation matrices).", "Formally, we define a vectorization m_r for each relation matrix M_r, and use it as input to the autoencoder.", "m_r is defined as a reshape of M_r flattened into a d²-dimension vector, and normalized such that ‖m_r‖ = √d.
We define c_r := ReLU(A m_r) (2) as the coding.", "Here A is a c × d² matrix with c ≪ d², and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010).", "We reconstruct the input from c_r by multiplying a d² × c matrix B.", "We want B c_r to be more similar to m_r than to other relations.", "For this purpose, we define a similarity g(r_1, r_2) := exp( (1/√(dc)) m_{r_1}^T B c_{r_2} ), (3) which measures the length of B c_{r_2} projected onto the direction of m_{r_1}.", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r* for each relation r and maximize L_2 := Σ_{r∈R} ln[ g(r, r) / (k + g(r, r)) ] + Σ_{r*∼R} ln[ k / (k + g(r, r*)) ] as our reconstruction objective.", "Maximizing L_2 increases m_r's similarity with B c_r, and decreases it with B c_{r*}.", "During joint training, both L_1 and L_2 are simultaneously maximized, and the gradient ∇L_2 propagates to relation matrices as well.", "Since ∇L_2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L_1 and ∇L_2, but if they follow ∇L_1 too much, the autoencoder has no effect; conversely, if they follow ∇L_2 too often, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse - in which the autoencoder imposes arbitrary patterns on relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L_1 + L_2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1/√(dc) in the definition of the similarity function (3), perhaps in combination with other settings as we discuss below.", "We have tried different factors 1, 1/√d, 1/√c and 1/(dc) instead, with various combinations of d and c; but the autoencoder failed to learn meaningful codings in those settings.", "When the scaling factor is too small (e.g. 1/(dc)), all relations get almost the same coding; conversely, if the factor is too large (e.g. 1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L_1 and ∇L_2.", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ) := η / (1 + ηλτ). (4)", "Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a step counter τ_r for each relation r, counting the \"number of updates\" instead of data points.", "That is, whenever M_r gets a nonzero update from a gradient calculation, τ_r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η_1, λ_1 for updates coming from ∇L_1, and η_2, λ_2 for updates coming from ∇L_2.", "Thus, letting ∆_1 be the partial gradient of L_1 and ∆_2 the partial gradient of L_2 with respect to M_r, we update M_r by α_1(τ_r)∆_1 + α_2(τ_r)∆_2 at each step, where α_1(τ_r) := η_1 / (1 + η_1 λ_1 τ_r) and α_2(τ_r) := η_2 / (1 + η_2 λ_2 τ_r).", "The rule for setting η_1, λ_1 and η_2, λ_2 is
that η_2 should be much smaller than η_1, because η_1, η_2 control the magnitude of learning rates at the early stage of training, with the autoencoder still largely random and ∆_2 not making much sense; on the other hand, one has to choose λ_1 and λ_2 such that ∆_1/λ_1 and ∆_2/λ_2 are at the same scale, because the learning rates approach 1/(λ_1 τ_r) and 1/(λ_2 τ_r) respectively, as the training proceeds.", "In this way, the autoencoder will not impose random patterns on relation matrices according to its initialization at the early stage, and a balance is kept between α_1(τ_r)∆_1 and α_2(τ_r)∆_2 later.", "But how can we estimate ∆_1 and ∆_2?", "It seems that we can approximately calculate their scales from initialization.", "In this work, we use i.i.d. Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are ‖u_h‖ ≈ 1, ‖v_t‖ ≈ 1, ‖M_r‖ ≈ √d, and ‖B A m_r‖ ≈ √(dc).", "Thus, by calculating ∇L_1 and ∇L_2 using (1) and (3), we have approximately ‖∆_1‖ ≈ ‖u_h‖·‖v_t‖ ≈ 1, and ‖∆_2‖ ≈ (1/√(dc))·‖B c_r‖ ≈ (1/√(dc))·‖B A m_r‖ ≈ 1.", "It suggests that, because of the scaling factor 1/√(dc) in (3), we have ∆_1 and ∆_2 at the same scale, so we can set λ_1 = λ_2.", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show performance gains from these settings using the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to ‖M_r‖ = √d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize ‖M_r M_r^T − (1/d)·tr(M_r M_r^T)·I‖ during training.", "This regularizer drives M_r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of a pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016).", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somewhat counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017).", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al. (2013)'s vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1-to-1 relations but might be too simple to represent N-to-N relations accurately (Wang et al., 2017).", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) are proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.",
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
"However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks, because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimension d = 256 and c = 16, and the SGD hyper-parameters η 1 = 1/64, η 2 = 2^(-14) and λ 1 = λ 2 = 2^(-14).", "The training batch size is 32 and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walk to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple h, r, ?", "in the KBC test, we calculate a score s(h, r, e) from (1) for every entity e ∈ E such that h, r, e does not appear in any of the training, validation, or test sets (Bordes et al., 2013).", "Then, the calculated scores, together with s(h, r, t) for the gold triple, are converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.", "KBC Results The results are shown in Table 2.", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes clearer when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because, generally, joint training contributes through its regularizing effects, and drastic improvements are less expected.", "(Footnote 3: The source code and trained models are publicly released at https://github.com/tianran/glimvec.)", "[Figure 2: sparse relation codings; example relations include profession (and several inverse relations such as film_crew_role, film_release_region and film_language), nationality, currency_of_country, currency_of_company, currency_of_university, currency_of_film_budget, release_region_of_film, corporation_of_film, producer_of_film and writer_of_film, over the 16 code dimensions.]", "When compositional training is enabled, the system usually achieves better MR, though it does not always improve in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018).", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For re-experiments, we use Lin et al.", "(2015b)'s implementation of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations; and Nickel et al.", "(2016b)'s implementation of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We experimented with the default settings, and found that our models outperform most of them.",
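To make the evaluation protocol just described concrete, here is a minimal sketch of the filtered ranking and the MR/MRR/H10 metrics; the data structures and names are illustrative assumptions, not the released glimvec code.

```python
# Hedged sketch of filtered ranking (Bordes et al., 2013) for the bilinear score.
import numpy as np

def evaluate(U, M, V, test_triples, known_triples):
    """U: (n_ent, d) head vectors; M: dict relation -> (d, d) matrix;
    V: (n_ent, d) tail vectors; known_triples: all (h, r, t) seen in
    train/valid/test, used to filter out other correct answers."""
    ranks = []
    for h, r, t in test_triples:
        scores = np.exp(U[h] @ M[r] @ V.T)            # s(h, r, e) for every entity e
        for e in range(V.shape[0]):                   # filtered setting:
            if e != t and (h, r, e) in known_triples:
                scores[e] = -np.inf                   # ignore other gold tails
        ranks.append(1 + np.sum(scores > scores[t]))  # rank of the gold tail t
    ranks = np.array(ranks, dtype=float)
    mr = ranks.mean()                                 # Mean Rank (lower is better)
    mrr = (1.0 / ranks).mean()                        # Mean Reciprocal Rank (higher is better)
    h10 = (ranks <= 10).mean()                        # Hits at 10 (higher is better)
    return mr, mrr, h10
```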
"Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017), ComplEx (Trouillon et al., 2016) and ConvE were previously the best results.", "[Table 2: KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets.", "The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results.", "Bold numbers are the best in each sector, and (*) indicates the best of all.]", "Our models mostly outperform them.", "Other results include Kadlec et al.", "(2017)'s simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns sparse codings, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2, we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g.", "film and language), which probably constitute the skeleton of a KB.", "In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrains them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have a cosine similarity of 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP (McInnes and Healy, 2018) to embed M r into a 2D plane (we also tried t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful).", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3.", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.",
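A minimal sketch of this visualization step, assuming the umap-learn package; the matrices here are random stand-ins for trained relation matrices, and the dimensions are toy values.

```python
import numpy as np
import umap  # pip install umap-learn

d, n_rel = 32, 474                          # toy d; FB15k-237 has 237 relations plus inverses
rng = np.random.default_rng(0)
mats = rng.normal(size=(n_rel, d, d))       # stand-ins for trained relation matrices M_r
flat = mats.reshape(n_rel, -1)              # vectorize each matrix, as m_r in the text
emb = umap.UMAP(n_components=2, random_state=0).fit_transform(flat)
print(emb.shape)                            # (474, 2) points, one per relation, ready to plot
```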
"On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with frequent ones, and multiple traces of low dimension structures.", "It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures against Figure 3b, which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r 1 /r 2 , r 3 ) pairs such that r 1 /r 2 matches r 3 .", "Formally, the list is constructed as below.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that h, r, t is a fact in the KB.", "Similarly, we define C(r 1 /r 2 ) as the set of (h, t) pairs such that h, r 1 /r 2 , t is a path.", "We regard (r 1 /r 2 , r 3 ) as a compositional constraint if their content sets are similar; that is, if |C(r 1 /r 2 ) ∩ C(r 3 )| ≥ 50 and the Jaccard similarity between C(r 1 /r 2 ) and C(r 3 ) is ≥ 0.4.", "Then, after filtering out degenerate cases such as r 1 = r 3 or r 2 = r 1 ^(-1) (i.e., r 2 being the inverse of r 1 ), we obtained a list of 154 compositional constraints, e.g.", "(currency of country/country of film, currency of film budget).", "For each compositional constraint (r 1 /r 2 , r 3 ) in the list, we take the matrices M 1 , M 2 and M 3 corresponding to r 1 , r 2 and r 3 respectively, and rank M 3 according to its cosine similarity with M 1 M 2 , among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as to a randomized baseline where M 2 is instead selected randomly from the relation matrices in JOINT+COMP (RANDOMM2).", "The results are shown in Table 3.", "We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests the hypothesis that joint training might just be clustering M 3 and M 1 here, to the extent that M 3 and M 1 are so close that even a random M 2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.", "Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) that joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings as we discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices toward orthogonality.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.",
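The constraint-mining procedure described above can be sketched as follows; the thresholds follow the text (|intersection| ≥ 50, Jaccard ≥ 0.4), while the triple format, function names, and the simplified degeneracy filter are assumptions.

```python
# Hedged sketch of mining (r1/r2, r3) compositional constraints from a KB.
from collections import defaultdict
from itertools import product

def mine_constraints(triples, min_overlap=50, min_jaccard=0.4):
    content = defaultdict(set)                     # C(r): set of (h, t) pairs
    by_head = defaultdict(set)                     # h -> {(r, t)} for path lookup
    for h, r, t in triples:
        content[r].add((h, t))
        by_head[h].add((r, t))
    constraints = []
    relations = list(content)
    for r1, r2 in product(relations, repeat=2):
        comp = set()                               # C(r1/r2): pairs joined by a 2-step path
        for h, mid in content[r1]:
            for r, t in by_head.get(mid, ()):
                if r == r2:
                    comp.add((h, t))
        for r3 in relations:
            if r3 in (r1, r2):                     # simplified degeneracy filter; the text
                continue                           # also removes r2 = inverse of r1
            inter = len(comp & content[r3])
            union = len(comp | content[r3])
            if inter >= min_overlap and union and inter / union >= min_jaccard:
                constraints.append((r1, r2, r3))
    return constraints
```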
"In this work, path lengths are sampled from a Poisson distribution; we thus vary the mean λ of the Poisson distribution to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5.", "We can see that, as λ gets larger, MR improves considerably but MRR drops slightly.", "It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves base models even more as the paths get longer, especially in MR.", "It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discovering compositional constraints and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below.", "For an incomplete triple h, r, ?", "in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data, which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting where all triples with OOV entities are removed from the test.", "The results are shown in Table 6." ] }
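A small illustrative sketch of the OOV handling just described (replacing an OOV head with the most frequent head of the same relation); the names and data layout are hypothetical.

```python
from collections import Counter, defaultdict

head_freq = defaultdict(Counter)                 # relation -> Counter over head entities

def fit(train_triples):
    for h, r, t in train_triples:
        head_freq[r][h] += 1

def resolve_head(h, r, vocab):
    """If head h is OOV, fall back to the most frequent head of relation r."""
    if h in vocab:
        return h
    common = head_freq[r].most_common(1)
    return common[0][0] if common else h         # keep h if r was never seen either

# For an OOV gold tail, the text scores/ranks with the zero vector,
# i.e. a zero array of dimension d in place of a learned embedding.
```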
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-13
Compared to Previous Research
MR MRR H10 MR MRR H10 Base model is competitive enough Our models achieved state-of-the-art results
MR MRR H10 MR MRR H10 Base model is competitive enough Our models achieved state-of-the-art results
[]
GEM-SciDuet-train-126#paper-1344#slide-14
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices -for one reason, composition of two relations M 1 , M 2 may match a third M 3 (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M 1 ·M 2 ≈ M 3 ). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M 3 ) also justifies dimension reduction, because it implies a compositional constraint M 1 · M 2 ≈ M 3 that can be satisfied only by a lower dimension sub-manifold in the parameter space.", "(Footnote 1: It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.)", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices, or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017).", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g.", "the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds a posteriori from data, and it does not impose any explicit hard constraints.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form h, r, t , where h, t ∈ E are entities and r ∈ R is a relation (e.g.", "The Matrix, country of film, Australia ).", "A relation r has its inverse r^(-1) ∈ R, so that for every h, r, t ∈ T we regard t, r^(-1), h as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete h, r, ?", "triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u h , v t respectively, and relation r as a d×d matrix M r .", "If u h , v t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into h, r, ?", "is calculated by u h M r (with each nonzero entry corresponding to an answer).",
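A toy check of the adjacency-matrix analogy above, with made-up entities: when entity vectors are one-hot and d = |E|, u h M r recovers exactly the set of tails reachable from h via relation r.

```python
import numpy as np

entities = ["TheMatrix", "Australia", "FindingNemo", "US"]
E = {e: i for i, e in enumerate(entities)}
M_country = np.zeros((4, 4))                        # adjacency matrix for country_of_film
M_country[E["TheMatrix"], E["Australia"]] = 1       # <TheMatrix, country_of_film, Australia>
M_country[E["FindingNemo"], E["Australia"]] = 1     # <FindingNemo, country_of_film, Australia>

u = np.eye(4)[E["FindingNemo"]]                     # one-hot head vector u_h
tails = u @ M_country                               # nonzero entries mark the answers
print([entities[i] for i in np.flatnonzero(tails)]) # ['Australia']
```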
"Thus, we have u h M r v t > 0 if and only if h, r, t ∈ T.", "This motivates us to use u h M r v t as a natural parameter to model the plausibility of h, r, t , even in a low dimension space with d ≪ |E|.", "Thus, we define the score function for the basic model as s(h, r, t) := exp(u h M r v t ). (1)", "This is similar to the bilinear model of Nickel et al.", "(2011), except that we distinguish u h (the vector for head entities) from v t (the vector for tail entities).", "It has also been proposed in Tian et al.", "(2016), but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations r 1 /.../r l to model paths in a KB (Guu et al., 2015), as defined by r 1 , ..., r l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous one.", "For example, a sequence of two facts The Matrix, country of film, Australia and Australia, currency of country, Australian Dollar forms a path of composition country of film / currency of country, because the head of the second fact (i.e.", "Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r 1 /.../r l , t) := exp(u h M r 1 · · · M r l v t ) to measure the plausibility of a path.", "It is explored in Guu et al.", "(2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn the parameters u h , v t , M r of the score function, we follow Tian et al.", "(2016) in using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) h, r 1 /.../r l , t taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t*.", "Then, we maximize L 1 := Σ_paths ln[ s(h, r 1 /.../r l , t) / (k + s(h, r 1 /.../r l , t)) ] + Σ_noise ln[ k / (k + s(h, r 1 /.../r l , t*)) ] as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as a probability, L 1 represents the log-likelihood of \" h, r 1 /.../r l , t being an actual path and h, r 1 /.../r l , t* being noise\".", "Maximizing L 1 increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder Autoencoders learn efficient codings of high-dimensional data while trying to reconstruct the original data from the coding.", "By jointly training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e.", "relation matrices).", "Formally, we define a vectorization m r for each relation matrix M r , and use it as input to the autoencoder.", "m r is defined as a reshape of M r flattened into a d^2-dimension vector, and normalized such that ‖m r ‖ = √d.",
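The path score and the NCE objective L 1 reconstructed above can be sketched as follows; all tensors are random stand-ins, and the loop structure is illustrative rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 5                                   # toy dimension; k noises per path
u_h = rng.normal(0, 1/np.sqrt(d), d)          # head vector
v_t = rng.normal(0, 1/np.sqrt(d), d)          # gold tail vector
Ms = [rng.normal(0, 1/np.sqrt(d), (d, d)) for _ in range(2)]  # M_{r_1}, M_{r_2}

def score(u, mats, v):
    x = u
    for M in mats:                            # compose relations along the path
        x = x @ M
    return np.exp(x @ v)                      # s(h, r_1/.../r_l, t)

s_pos = score(u_h, Ms, v_t)
loss = -np.log(s_pos / (k + s_pos))           # positive term of L_1 (negated for a loss)
for _ in range(k):                            # k random noise tails t*
    s_neg = score(u_h, Ms, rng.normal(0, 1/np.sqrt(d), d))
    loss += -np.log(k / (k + s_neg))          # noise term of L_1
print(loss)
```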
"We define c r := ReLU(Am r ) (2) as the coding.", "Here A is a c × d^2 matrix with c ≪ d^2, and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010).", "We reconstruct the input from c r by multiplying a d^2 × c matrix B.", "We want Bc r to be more similar to m r than to other relations.", "For this purpose, we define a similarity g(r 1 , r 2 ) := exp( (1/√(dc)) m r 1 · Bc r 2 ), (3) which measures the length of Bc r 2 projected onto the direction of m r 1 .", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r* for each relation r, and maximize L 2 := Σ_{r∈R} ln[ g(r, r) / (k + g(r, r)) ] + Σ_{r*∼R} ln[ k / (k + g(r, r*)) ] as our reconstruction objective.", "Maximizing L 2 increases m r 's similarity with Bc r , and decreases it with Bc r* .", "During joint training, both L 1 and L 2 are simultaneously maximized, and the gradient ∇L 2 propagates to relation matrices as well.", "Since ∇L 2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L 1 and ∇L 2 , but if they follow ∇L 1 too much, the autoencoder has no effect; conversely, if they follow ∇L 2 too often, all relation matrices collapse into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse, in which the autoencoder imposes arbitrary patterns on relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L 1 + L 2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1/√(dc) in the definition of the similarity function (3), perhaps in combination with other settings as we discuss below.", "We have tried the factors 1, 1/√d, 1/√c and 1/(dc) instead, with various combinations of d and c, but the autoencoder failed to learn meaningful codings in those settings.", "When the scaling factor is too small (e.g.", "1/(dc)), all relations get almost the same coding; conversely, if the factor is too large (e.g.", "1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L 1 and ∇L 2 .", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ) := η / (1 + ηλτ). (4)", "Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a step counter τ r for each relation r, counting the number of updates instead of data points.", "That is, whenever M r gets a nonzero update from a gradient calculation, τ r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η 1 , λ 1 for updates coming from ∇L 1 , and η 2 , λ 2 for updates coming from ∇L 2 .", "Thus, letting ∆ 1 be the partial gradient of ∇L 1 and ∆ 2 the partial gradient of ∇L 2 , we update M r by α 1 (τ r )∆ 1 + α 2 (τ r )∆ 2 at each step, where α 1 (τ r ) := η 1 / (1 + η 1 λ 1 τ r ) and α 2 (τ r ) := η 2 / (1 + η 2 λ 2 τ r ).",
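A minimal sketch of the autoencoder side, Eqs. (2)-(3), and of the per-relation learning rates α 1 , α 2 ; the dimensions are toy values and the initialization details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 16, 4                                  # toy sizes with c << d*d
A = rng.normal(0, 1/d, (c, d * d))            # encoder matrix
B = rng.normal(0, 1/d, (d * d, c))            # decoder matrix

def vectorize(M):
    m = M.reshape(-1)
    return m * (np.sqrt(d) / np.linalg.norm(m))   # normalize so ||m_r|| = sqrt(d)

def g(m1, m2):                                # similarity, Eq. (3)
    code = np.maximum(A @ m2, 0.0)            # c_r = ReLU(A m_r), Eq. (2)
    return np.exp(m1 @ (B @ code) / np.sqrt(d * c))   # note the 1/sqrt(dc) scaling

def alpha(eta, lam, tau_r):                   # per-relation learning rate, modified Eq. (4)
    return eta / (1.0 + eta * lam * tau_r)    # tau_r counts updates to this M_r

m = vectorize(rng.normal(size=(d, d)))
print(g(m, m), alpha(1/64, 2**-14, 100))
```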
"The rule for setting η 1 , λ 1 and η 2 , λ 2 is that η 2 should be much smaller than η 1 , because η 1 , η 2 control the magnitude of learning rates at the early stage of training, when the autoencoder is still largely random and ∆ 2 does not make much sense; on the other hand, one has to choose λ 1 and λ 2 such that ‖∆ 1 ‖/λ 1 and ‖∆ 2 ‖/λ 2 are at the same scale, because the learning rates approach 1/(λ 1 τ r ) and 1/(λ 2 τ r ) respectively as training proceeds.", "In this way, the autoencoder will not impose random patterns on relation matrices according to its initialization at the early stage, and a balance is kept between α 1 (τ r )∆ 1 and α 2 (τ r )∆ 2 later.", "But how to estimate ∆ 1 and ∆ 2 ?", "It seems that we can approximately calculate their scales from the initialization.", "In this work, we use i.i.d.", "Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are ‖u h ‖ ≈ 1, ‖v t ‖ ≈ 1, ‖M r ‖ ≈ √d, and ‖BAm r ‖ ≈ √(dc).", "Thus, by calculating ∇L 1 and ∇L 2 using (1) and (3), we have approximately ‖∆ 1 ‖ ≈ ‖u h ‖‖v t ‖ ≈ 1, and ‖∆ 2 ‖ ≈ (1/√(dc))‖Bc r ‖ ≈ (1/√(dc))‖BAm r ‖ ≈ 1.", "It suggests that, because of the scaling factor 1/√(dc) in (3), ∆ 1 and ∆ 2 are at the same scale, so we can set λ 1 = λ 2 .", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show performance gains from these settings using the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to ‖M r ‖ = √d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize ‖M r ^T M r − (1/d) tr(M r ^T M r ) I‖ during training.", "This regularizer drives M r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of a pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016).", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somewhat counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017).", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al.", "(2013)'s vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1-to-1 relations but might be too simple to represent N-to-N relations accurately (Wang et al., 2017).", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) are proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.",
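The two most impactful base-model settings, the orthogonality-driving regularizer and the (I + G)/2 initialization, can be sketched as below; the transposes in the regularizer are restored here as an assumption, inferred from the stated goal of driving M r toward an orthogonal matrix.

```python
import numpy as np

def ortho_penalty(M):
    """Norm of M^T M - (tr(M^T M)/d) I; zero when M is (scaled) orthogonal."""
    d = M.shape[0]
    MtM = M.T @ M
    return np.linalg.norm(MtM - np.trace(MtM) / d * np.eye(d))

def init_relation(d, rng):
    G = rng.normal(0, 1/np.sqrt(d), (d, d))   # random part
    return (np.eye(d) + G) / 2                # identity part helps pass head -> tail

rng = np.random.default_rng(0)
M = init_relation(8, rng)
print(ortho_penalty(M))                       # small for near-orthogonal matrices
```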
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
"However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks, because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimension d = 256 and c = 16, and the SGD hyper-parameters η 1 = 1/64, η 2 = 2^(-14) and λ 1 = λ 2 = 2^(-14).", "The training batch size is 32 and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walk to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple h, r, ?", "in the KBC test, we calculate a score s(h, r, e) from (1) for every entity e ∈ E such that h, r, e does not appear in any of the training, validation, or test sets (Bordes et al., 2013).", "Then, the calculated scores, together with s(h, r, t) for the gold triple, are converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.", "KBC Results The results are shown in Table 2.", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes clearer when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because, generally, joint training contributes through its regularizing effects, and drastic improvements are less expected.", "(Footnote 3: The source code and trained models are publicly released at https://github.com/tianran/glimvec.)", "[Figure 2: sparse relation codings; example relations include profession (and several inverse relations such as film_crew_role, film_release_region and film_language), nationality, currency_of_country, currency_of_company, currency_of_university, currency_of_film_budget, release_region_of_film, corporation_of_film, producer_of_film and writer_of_film, over the 16 code dimensions.]", "When compositional training is enabled, the system usually achieves better MR, though it does not always improve in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018).", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For re-experiments, we use Lin et al.", "(2015b)'s implementation of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations; and Nickel et al.", "(2016b)'s implementation of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We experimented with the default settings, and found that our models outperform most of them.",
"Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017), ComplEx (Trouillon et al., 2016) and ConvE were previously the best results.", "[Table 2: KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets.", "The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results.", "Bold numbers are the best in each sector, and (*) indicates the best of all.]", "Our models mostly outperform them.", "Other results include Kadlec et al.", "(2017)'s simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns sparse codings, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2, we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g.", "film and language), which probably constitute the skeleton of a KB.", "In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrains them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have a cosine similarity of 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP (McInnes and Healy, 2018) to embed M r into a 2D plane (we also tried t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful).", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3.", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.",
"On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with frequent ones, and multiple traces of low dimension structures.", "It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures against Figure 3b, which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r 1 /r 2 , r 3 ) pairs such that r 1 /r 2 matches r 3 .", "Formally, the list is constructed as below.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that h, r, t is a fact in the KB.", "Similarly, we define C(r 1 /r 2 ) as the set of (h, t) pairs such that h, r 1 /r 2 , t is a path.", "We regard (r 1 /r 2 , r 3 ) as a compositional constraint if their content sets are similar; that is, if |C(r 1 /r 2 ) ∩ C(r 3 )| ≥ 50 and the Jaccard similarity between C(r 1 /r 2 ) and C(r 3 ) is ≥ 0.4.", "Then, after filtering out degenerate cases such as r 1 = r 3 or r 2 = r 1 ^(-1) (i.e., r 2 being the inverse of r 1 ), we obtained a list of 154 compositional constraints, e.g.", "(currency of country/country of film, currency of film budget).", "For each compositional constraint (r 1 /r 2 , r 3 ) in the list, we take the matrices M 1 , M 2 and M 3 corresponding to r 1 , r 2 and r 3 respectively, and rank M 3 according to its cosine similarity with M 1 M 2 , among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as to a randomized baseline where M 2 is instead selected randomly from the relation matrices in JOINT+COMP (RANDOMM2).", "The results are shown in Table 3.", "We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests the hypothesis that joint training might just be clustering M 3 and M 1 here, to the extent that M 3 and M 1 are so close that even a random M 2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.", "Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) that joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings as we discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices toward orthogonality.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.",
"In this work, path lengths are sampled from a Poisson distribution; we thus vary the mean λ of the Poisson distribution to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5.", "We can see that, as λ gets larger, MR improves considerably but MRR drops slightly.", "It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves base models even more as the paths get longer, especially in MR.", "It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discovering compositional constraints and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below.", "For an incomplete triple h, r, ?", "in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data, which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting where all triples with OOV entities are removed from the test.", "The results are shown in Table 6." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-14
What Does the Trained Autoencoder Look Like
Sparse coding of relation matrices Interpretable to some extent
Sparse coding of relation matrices Interpretable to some extent
[]
GEM-SciDuet-train-126#paper-1344#slide-15
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices -for one reason, composition of two relations M 1 , M 2 may match a third M 3 (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M 1 ·M 2 ≈ M 3 ). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M 3 ) also justifies dimension reduction, because it implies a compositional constraint M 1 · M 2 ≈ M 3 that can be satisfied only by a lower dimension sub-manifold in the parameter space.", "(Footnote 1: It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.)", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices, or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017).", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g.", "the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds a posteriori from data, and it does not impose any explicit hard constraints.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form h, r, t , where h, t ∈ E are entities and r ∈ R is a relation (e.g.", "The Matrix, country of film, Australia ).", "A relation r has its inverse r^(-1) ∈ R, so that for every h, r, t ∈ T we regard t, r^(-1), h as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete h, r, ?", "triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u h , v t respectively, and relation r as a d×d matrix M r .", "If u h , v t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into h, r, ?", "is calculated by u h M r (with each nonzero entry corresponding to an answer).",
"Thus, we have $u_h^\top M_r v_t > 0$ if and only if $\langle h, r, t \rangle \in T$.", "This motivates us to use $u_h^\top M_r v_t$ as a natural parameter to model the plausibility of $\langle h, r, t \rangle$, even in a low dimension space with $d \ll |E|$.", "Thus, we define the score function for the basic model as $s(h, r, t) := \exp(u_h^\top M_r v_t)$ (1).", "This is similar to the bilinear model of Nickel et al. (2011), except that we distinguish $u_h$ (the vector for head entities) from $v_t$ (the vector for tail entities).", "It has also been proposed in Tian et al. (2016), but for modeling dependency trees rather than KBs.", "More generally, we consider a composition of relations $r_1/\ldots/r_l$ to model paths in a KB (Guu et al., 2015), defined by $r_1, \ldots, r_l$ participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous one.", "For example, the two facts $\langle$The Matrix, country of film, Australia$\rangle$ and $\langle$Australia, currency of country, Australian Dollar$\rangle$ form a path of composition country of film / currency of country, because the head of the second fact (i.e. Australia) coincides with the tail of the first.", "Using the previous $d = |E|$ analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define $s(h, r_1/\ldots/r_l, t) := \exp(u_h^\top M_{r_1} \cdots M_{r_l} v_t)$ to measure the plausibility of a path.", "Guu et al. (2015) explored learning a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn the parameters $u_h, v_t, M_r$ of the score function, we follow Tian et al. (2016) in using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) $\langle h, r_1/\ldots/r_l, t \rangle$ taken from the KB, we generate negative samples by replacing the tail entity $t$ with some random noise $t^*$.", "Then, we maximize $L_1 := \sum_{\text{path}} \ln \frac{s(h, r_1/\ldots/r_l, t)}{k + s(h, r_1/\ldots/r_l, t)} + \sum_{\text{noise}} \ln \frac{k}{k + s(h, r_1/\ldots/r_l, t^*)}$ as our KB-learning objective.", "Here, $k$ is the number of noises generated for each path.", "When the score function is regarded as a probability, $L_1$ represents the log-likelihood of '$\langle h, r_1/\ldots/r_l, t \rangle$ being an actual path and $\langle h, r_1/\ldots/r_l, t^* \rangle$ being noise'.", "Maximizing $L_1$ increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder Autoencoders learn efficient codings of high-dimensional data while trying to reconstruct the original data from the coding.", "By jointly training relation matrices with an autoencoder, we expect it to help reduce the dimensionality of the original data (i.e. the relation matrices).", "Formally, we define a vectorization $m_r$ for each relation matrix $M_r$, and use it as input to the autoencoder.", "$m_r$ is defined as $M_r$ flattened into a $d^2$-dimension vector, normalized such that $\|m_r\| = \sqrt{d}$.",
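To make the Sec.2 definitions concrete before moving to the autoencoder, here is a minimal NumPy sketch of the bilinear score (1), its path generalization, and one path's NCE contribution to $L_1$; this is our own illustration with toy dimensions and hypothetical names, not the released glimvec implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 256, 5                                # embedding dimension, noises per path

# toy parameters, initialized as in the paper: i.i.d. Gaussians of variance 1/d
u_h = rng.normal(0, np.sqrt(1 / d), d)            # head entity vector
v_t = rng.normal(0, np.sqrt(1 / d), d)            # gold tail entity vector
Ms = [rng.normal(0, np.sqrt(1 / d), (d, d))]      # relation matrices along a path

def score(u_h, Ms, v_t):
    """s(h, r1/.../rl, t) = exp(u_h^T M_r1 ... M_rl v_t); a single triple is l = 1."""
    x = u_h
    for M in Ms:                             # composition = chained matrix products
        x = x @ M
    return np.exp(x @ v_t)

def nce_path_loss(u_h, Ms, v_t, noise_vs):
    """One path's term in L1: ln s/(k+s) for the true tail, ln k/(k+s*) per noise."""
    s = score(u_h, Ms, v_t)
    total = np.log(s / (k + s))
    for v_star in noise_vs:
        s_star = score(u_h, Ms, v_star)
        total += np.log(k / (k + s_star))
    return total                             # maximized over all paths and noises

noise_vs = rng.normal(0, np.sqrt(1 / d), (k, d))
print(nce_path_loss(u_h, Ms, v_t, noise_vs))
```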
"We define $c_r := \mathrm{ReLU}(A m_r)$ (2) as the coding.", "Here $A$ is a $c \times d^2$ matrix with $c \ll d^2$, and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010).", "We reconstruct the input from $c_r$ by multiplying by a $d^2 \times c$ matrix $B$.", "We want $B c_r$ to be more similar to $m_r$ than to other relations.", "For this purpose, we define a similarity $g(r_1, r_2) := \exp(\frac{1}{\sqrt{dc}} m_{r_1}^\top B c_{r_2})$ (3), which measures the length of $B c_{r_2}$ projected onto the direction of $m_{r_1}$.", "In order to learn the parameters $A, B$, we adopt the Noise Contrastive Estimation scheme as in Sec.2: we generate random noises $r^*$ for each relation $r$ and maximize $L_2 := \sum_{r \in R} \ln \frac{g(r, r)}{k + g(r, r)} + \sum_{r^* \sim R} \ln \frac{k}{k + g(r, r^*)}$ as our reconstruction objective.", "Maximizing $L_2$ increases $m_r$'s similarity with $B c_r$, and decreases its similarity with $B c_{r^*}$.", "During joint training, both $L_1$ and $L_2$ are simultaneously maximized, and the gradient $\nabla L_2$ propagates to the relation matrices as well.", "Since $\nabla L_2$ depends on $A$ and $B$, and $A, B$ interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both $\nabla L_1$ and $\nabla L_2$; but if the updates from $\nabla L_1$ dominate, the autoencoder has no effect, and conversely, if the updates from $\nabla L_2$ dominate, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse, in which the autoencoder imposes arbitrary patterns on relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of $L_1 + L_2$ does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important 'magic' is the scaling factor $\frac{1}{\sqrt{dc}}$ in the definition of the similarity function (3), perhaps in combination with the other settings we discuss below.", "We have tried the factors $1$, $\frac{1}{\sqrt{d}}$, $\frac{1}{\sqrt{c}}$ and $\frac{1}{dc}$ instead, with various combinations of $d$ and $c$, but the autoencoder failed to learn meaningful codings in those settings.", "When the scaling factor is too small (e.g. $\frac{1}{dc}$), all relations get almost the same coding; conversely, if the factor is too large (e.g. $1$), all codings get very close to $0$.", "The next important rule is to keep a balance between the updates coming from $\nabla L_1$ and $\nabla L_2$.", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as $\alpha(\tau) := \frac{\eta}{1 + \eta \lambda \tau}$ (4).", "Here, $\eta, \lambda$ are hyper-parameters and $\tau$ is a counter of processed data points.", "In this work, in order to control the updates in enough detail to keep a balance, we modify (4) to use a step counter $\tau_r$ for each relation $r$, counting the 'number of updates' instead of data points.", "That is, whenever $M_r$ gets a nonzero update from a gradient calculation, $\tau_r$ increases by 1.", "Furthermore, we use different hyper-parameters for different 'types of updates', namely $\eta_1, \lambda_1$ for updates coming from $\nabla L_1$, and $\eta_2, \lambda_2$ for updates coming from $\nabla L_2$.", "Thus, letting $\Delta_1$ be the partial gradient of $\nabla L_1$ and $\Delta_2$ the partial gradient of $\nabla L_2$, we update $M_r$ by $\alpha_1(\tau_r) \Delta_1 + \alpha_2(\tau_r) \Delta_2$ at each step, where $\alpha_1(\tau_r) := \frac{\eta_1}{1 + \eta_1 \lambda_1 \tau_r}$ and $\alpha_2(\tau_r) := \frac{\eta_2}{1 + \eta_2 \lambda_2 \tau_r}$.",
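As a sketch of the autoencoder side and the per-relation two-rate update just defined: the gradients $\Delta_1, \Delta_2$ are passed in as placeholders (in practice they come from differentiating $L_1$ and $L_2$), the hyper-parameter values are those reported later in the experiments, and all helper names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 256, 16
A = rng.normal(0, np.sqrt(1 / d), (c, d * d))     # encoder, c << d^2
B = rng.normal(0, np.sqrt(1 / d), (d * d, c))     # decoder

def vectorize(M):
    """m_r: M_r flattened to d^2 dims, normalized so ||m_r|| = sqrt(d)."""
    m = M.reshape(-1)
    return m * (np.sqrt(d) / np.linalg.norm(m))

def coding(m):
    return np.maximum(A @ m, 0.0)                 # c_r = ReLU(A m_r), Eq. (2)

def similarity(m1, c2):
    return np.exp((m1 @ (B @ c2)) / np.sqrt(d * c))   # g(r1, r2), Eq. (3)

# two learning-rate schedules with a per-relation step counter tau_r
eta1, lam1 = 1 / 64, 2 ** -14
eta2, lam2 = 2 ** -14, 2 ** -14
tau = {}                                          # tau_r for each relation r

def update(M_r, r, delta1, delta2):
    """M_r <- M_r + alpha1(tau_r)*Delta1 + alpha2(tau_r)*Delta2 (gradient ascent)."""
    t = tau.get(r, 0)
    a1 = eta1 / (1 + eta1 * lam1 * t)
    a2 = eta2 / (1 + eta2 * lam2 * t)
    tau[r] = t + 1                                # count updates, not data points
    return M_r + a1 * delta1 + a2 * delta2

M = rng.normal(0, np.sqrt(1 / d), (d, d))
m = vectorize(M)
print(similarity(m, coding(m)))                   # g(r, r) for one toy relation
```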
"The rule for setting $\eta_1, \lambda_1$ and $\eta_2, \lambda_2$ is that $\eta_2$ should be much smaller than $\eta_1$, because $\eta_1, \eta_2$ control the magnitude of the learning rates at the early stage of training, when the autoencoder is still largely random and $\Delta_2$ does not make much sense; on the other hand, one has to choose $\lambda_1$ and $\lambda_2$ such that $\Delta_1/\lambda_1$ and $\Delta_2/\lambda_2$ are at the same scale, because the learning rates approach $1/(\lambda_1 \tau_r)$ and $1/(\lambda_2 \tau_r)$ respectively as training proceeds.", "In this way, the autoencoder will not impose random patterns from its initialization on the relation matrices at the early stage, and a balance is kept between $\alpha_1(\tau_r) \Delta_1$ and $\alpha_2(\tau_r) \Delta_2$ later.", "But how can we estimate $\Delta_1$ and $\Delta_2$?", "It seems that we can approximately calculate their scales from the initialization.", "In this work, we use i.i.d. Gaussians of variance $1/d$ to initialize parameters, so the initial Euclidean norms are $\|u_h\| \approx 1$, $\|v_t\| \approx 1$, $\|M_r\| \approx \sqrt{d}$, and $\|B A m_r\| \approx \sqrt{dc}$.", "Thus, by calculating $\nabla L_1$ and $\nabla L_2$ from (1) and (3), we have approximately $\|\Delta_1\| \approx \|u_h\| \|v_t\| \approx 1$, and $\|\Delta_2\| \approx \frac{1}{\sqrt{dc}} \|B c_r\| \approx \frac{1}{\sqrt{dc}} \|B A m_r\| \approx 1$.", "This suggests that, because of the scaling factor $\frac{1}{\sqrt{dc}}$ in (3), $\Delta_1$ and $\Delta_2$ are at the same scale, so we can set $\lambda_1 = \lambda_2$.", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show the performance gains from these settings on the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to $\|M_r\| = \sqrt{d}$ during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize $\|M_r^\top M_r - \frac{1}{d} \mathrm{tr}(M_r^\top M_r) I\|$ during training.", "This regularizer drives $M_r$ toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of a pure Gaussian, it is better to initialize matrices as $(I + G)/2$, where $G$ is random.", "The identity matrix $I$ helps passing information from head to tail (Tian et al., 2016).", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somewhat counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017).", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al. (2013)'s vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1-to-1 relations but might be too simple to represent N-to-N relations accurately (Wang et al., 2017).", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) were proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.",
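As a concrete illustration of the Sec.4.1 settings above (normalization, the orthogonality regularizer, $(I+G)/2$ initialization, and uniform negative sampling), a short sketch follows; it reflects our reading of the text, not code from the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_entities = 256, 10000

# Initialization: (I + G)/2 with G random; the identity passes head info to tail
G = rng.normal(0, np.sqrt(1 / d), (d, d))
M_r = (np.eye(d) + G) / 2

def normalize(M):
    """Keep ||M_r|| = sqrt(d) during training."""
    return M * (np.sqrt(d) / np.linalg.norm(M))

def ortho_penalty(M):
    """|| M^T M - (1/d) tr(M^T M) I ||; minimizing it drives M toward orthogonality."""
    S = M.T @ M
    return np.linalg.norm(S - (np.trace(S) / d) * np.eye(d))

def sample_noise_tails(k):
    """Uniform (not unigram) negative sampling over the entity vocabulary."""
    return rng.integers(0, num_entities, size=k)

M_r = normalize(M_r)
print(ortho_penalty(M_r), sample_noise_tails(5))
```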
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
"WN18 collects word relations from WordNet (Miller, 1995), and FB15k is taken from Freebase (Bollacker et al., 2008); both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks, because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from the training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimensions $d = 256$ and $c = 16$, and the SGD hyper-parameters $\eta_1 = 1/64$, $\eta_2 = 2^{-14}$ and $\lambda_1 = \lambda_2 = 2^{-14}$.", "The training batch size is 32, and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walks to sample paths of length $1 + X$, where $X$ is drawn from a Poisson distribution of mean $\lambda = 1.0$.", "For any incomplete triple $\langle h, r, ? \rangle$ in the KBC test, we calculate a score $s(h, r, e)$ from (1) for every entity $e \in E$ such that $\langle h, r, e \rangle$ does not appear in any of the training, validation, or test sets (Bordes et al., 2013).", "Then, the calculated scores, together with $s(h, r, t)$ for the gold triple, are converted to ranks, and the rank of the gold entity $t$ is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on the validation sets to determine the number of training epochs; we stop training when both MR and MRR have stopped improving.", "KBC Results The results are shown in Table 2.", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes clearer when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because, generally, joint training contributes with its regularizing effects, and drastic improvements are less expected.", "(The source code and trained models are publicly released at https://github.com/tianran/glimvec.)", "[Figure 2: sparse codings of example relations; the first group shows near one-hot codings of high frequency relations (profession, profession$^{-1}$, film_crew_role$^{-1}$, film_release_region$^{-1}$, film_language$^{-1}$, nationality), the second shows currency-related relations (currency_of_country, currency_of_company, currency_of_university, currency_of_film_budget), and the third shows film-related relations (currency_of_film_budget, release_region_of_film, corporation_of_film, producer_of_film, writer_of_film); the horizontal axis indexes code dimensions 1-16.]", "When compositional training is enabled, the system usually achieves better MR, though it does not always improve in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018).", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For the re-experiments, we use Lin et al. (2015b)'s implementation of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations, and Nickel et al. (2016b)'s implementation of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.",
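The filtered ranking protocol just described might be sketched as follows; `score_fn` and `known_triples` are placeholder names for a trained scorer and the union of training, validation, and test triples.

```python
import numpy as np

def filtered_rank(score_fn, h, r, gold_t, entities, known_triples):
    """Rank the gold tail, skipping every other entity e whose triple
    (h, r, e) already appears in train/valid/test (the 'filtered' setting)."""
    gold_score = score_fn(h, r, gold_t)
    rank = 1
    for e in entities:
        if e == gold_t or (h, r, e) in known_triples:
            continue
        if score_fn(h, r, e) > gold_score:
            rank += 1
    return rank

def metrics(ranks):
    """MR (lower is better), MRR and H10 (higher is better)."""
    ranks = np.asarray(ranks, dtype=float)
    return {"MR": ranks.mean(),
            "MRR": (1.0 / ranks).mean(),
            "H10": (ranks <= 10).mean()}
```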
"We experimented with the default settings, and found that our models outperform most of them.", "Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017), as well as ComplEx (Trouillon et al., 2016) and ConvE (Dettmers et al., 2018), were previously the best results.", "[Table 2: KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets. The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previously published results. Bold numbers are the best in each sector, and (*) indicates the best of all.]", "Our models mostly outperform them.", "Other results include Kadlec et al. (2017)'s simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve the best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions with analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns a sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2, we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g. film and language), which probably constitute the skeleton of a KB.", "In the second group, we found that the 12th dimension strongly correlates with currency; and in the third group, we found that the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of the relation matrices and never constrains them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have a cosine similarity of 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP (McInnes and Healy, 2018) to embed $M_r$ into a 2D plane (we also tried t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful).", "We use relation matrices trained on FB15k-237, and compare models trained for the same number of epochs.", "The results are shown in Figure 3.", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.",
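One might reproduce this kind of inspection as sketched below, assuming trained codes and relation matrices and the umap-learn package; the helper names are ours.

```python
import numpy as np
import umap  # pip install umap-learn

def top_code_dims(codes, names, topk=3):
    """Print the few dimensions where each relation's ReLU code is large."""
    for name, code in zip(names, codes):
        dims = np.argsort(code)[::-1][:topk]
        print(name, [(int(i), round(float(code[i]), 3)) for i in dims if code[i] > 0])

def embed_2d(rel_matrices):
    """2D UMAP embedding of flattened relation matrices, as in Figure 3."""
    X = np.stack([M.reshape(-1) for M in rel_matrices])
    return umap.UMAP(n_components=2).fit_transform(X)
```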
"On the other hand, in Figure 3b and Figure 3d we found less frequent relations clustered together with frequent ones, and multiple traces of low dimension structures.", "This suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures from Figure 3b, which we conjecture could be related to compositional constraints discovered through compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of $(r_1/r_2, r_3)$ pairs such that $r_1/r_2$ matches $r_3$.", "Formally, the list is constructed as follows.", "For any relation $r$, we define a content set $C(r)$ as the set of $(h, t)$ pairs such that $\langle h, r, t \rangle$ is a fact in the KB.", "Similarly, we define $C(r_1/r_2)$ as the set of $(h, t)$ pairs such that $\langle h, r_1/r_2, t \rangle$ is a path.", "We regard $(r_1/r_2, r_3)$ as a compositional constraint if their content sets are similar; that is, if $|C(r_1/r_2) \cap C(r_3)| \geq 50$ and the Jaccard similarity between $C(r_1/r_2)$ and $C(r_3)$ is $\geq 0.4$.", "Then, after filtering out degenerate cases such as $r_1 = r_3$ or $r_2 = r_1^{-1}$, we obtained a list of 154 compositional constraints, e.g. (currency of country / country of film, currency of film budget).", "For each compositional constraint $(r_1/r_2, r_3)$ in the list, we take the matrices $M_1$, $M_2$ and $M_3$ corresponding to $r_1$, $r_2$ and $r_3$ respectively, and rank $M_3$ according to its cosine similarity with $M_1 M_2$, among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as to a randomized baseline in which $M_2$ is instead selected randomly from the relation matrices of JOINT+COMP (RANDOMM2).", "The results are shown in Table 3.", "We evaluated 5 different random initializations for each model, trained for the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests the hypothesis that joint training might just be clustering $M_3$ and $M_1$ here, to the extent that $M_3$ and $M_1$ are so close that even a random $M_2$ can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it indeed learns compositions.", "Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show that (i) some crucial settings exist for the base model, and (ii) joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices toward orthogonality.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.",
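Returning to the Sec.6.2 protocol above, the constraint mining and matrix ranking could be sketched as follows, with the thresholds ($|C(r_1/r_2) \cap C(r_3)| \geq 50$, Jaccard $\geq 0.4$) taken from the text; the helper names are our own.

```python
import numpy as np

def content_set(kb, r):
    """C(r): all (h, t) with <h, r, t> a fact in the KB (kb: set of (h, r, t))."""
    return {(h, t) for (h, rel, t) in kb if rel == r}

def path_content_set(kb, r1, r2):
    """C(r1/r2): all (h, t) connected by a length-2 path, r1 then r2."""
    heads_of_mid = {}
    for (h, rel, m) in kb:
        if rel == r1:
            heads_of_mid.setdefault(m, set()).add(h)
    return {(h, t) for (m, rel, t) in kb if rel == r2
            for h in heads_of_mid.get(m, ())}

def is_constraint(kb, r1, r2, r3):
    a, b = path_content_set(kb, r1, r2), content_set(kb, r3)
    inter, union = len(a & b), len(a | b)
    return inter >= 50 and union > 0 and inter / union >= 0.4

def rank_m3(M1, M2, M3, all_Ms):
    """Rank M3 by cosine similarity with M1 @ M2 among all relation matrices."""
    target = (M1 @ M2).reshape(-1)
    def cos(M):
        v = M.reshape(-1)
        return (target @ v) / (np.linalg.norm(target) * np.linalg.norm(v))
    gold = cos(M3)
    return 1 + sum(cos(M) > gold for M in all_Ms if M is not M3)
```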
"In this work, path lengths are sampled from a Poisson distribution; we thus vary the mean $\lambda$ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5.", "We can see that, as $\lambda$ gets larger, MR improves considerably but MRR slightly drops.", "This suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves the base models even more as the paths get longer, especially in MR.", "This further supports our conjecture that joint training with an autoencoder may interact strongly with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks, with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; that the joint training technique drives high-dimensional data toward low dimension manifolds; and that the reduction of dimensionality may interact strongly with composition, helping to discover compositional constraints and benefiting from compositional training.", "We believe these findings provide an insightful understanding of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below.", "For an incomplete triple $\langle h, r, ? \rangle$ in the test, if $h$ is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation $r$ in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data, which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting where all triples with OOV entities are removed from the test.", "The results are shown in Table 6." ] }
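Finally, a small sketch of the appendix's OOV policy; `entity_vecs` and `head_counts` are assumed lookups built from the training data (e.g. `head_counts[r]` a collections.Counter of training heads of relation r).

```python
import numpy as np

def head_vector(h, r, entity_vecs, head_counts):
    """OOV head: fall back to the most frequent training head of relation r."""
    if h in entity_vecs:
        return entity_vecs[h]
    fallback = head_counts[r].most_common(1)[0][0]
    return entity_vecs[fallback]

def tail_vector(t, entity_vecs, d):
    """OOV gold tail: score and rank with the zero vector."""
    return entity_vecs.get(t, np.zeros(d))
```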
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-15
Composition of Relations
Composition of two relations in a KB can coincide with a third relation: Extracted 154 examples of compositional constraints
Composition of two relations in a KB can coincide with a third relation: Extracted 154 examples of compositional constraints
[]
GEM-SciDuet-train-126#paper-1344#slide-16
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices; for one reason, the composition of two relations $M_1$, $M_2$ may match a third $M_3$ (e.g. the composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. $M_1 \cdot M_2 \approx M_3$). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M 3 ) also justifies dimension reduction, because it implies a compositional constraint M 1 · M 2 ≈ M 3 that can be satisfied only by a lower dimension sub-manifold in the parameter space 1 .", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices , or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017) .", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g.", "the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1 ).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.", "1 It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form h, r, t , where h, t ∈ E are entities and r ∈ R is a relation (e.g.", "The Matrix, country of film, Australia ).", "A relation r has its inverse r −1 ∈ R so that for every h, r, t ∈ T , we regard t, r −1 , h as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete h, r, ?", "triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u h , v t respectively, and relation r as a d×d matrix M r .", "If u h , v t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into h, r, ?", "is calculated by u h M r (with each nonzero entry corresponds to an answer).", "Thus, we 
have u h M r v t > 0 if and only if h, r, t ∈ T .", "This motivates us to use u h M r v t as a natural parameter to model plausibility of h, r, t , even in a low dimension space with d |E|.", "Thus, we define the score function as s(h, r, t) := exp(u h M r v t ) (1) for the basic model.", "This is similar to the bilinear model of Nickel et al.", "(2011) , except that we distinguish u h (the vector for head entities) from v t (the vector for tail entities).", "It has also been proposed in Tian et al.", "(2016) , but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations r 1 / .", ".", ".", "/r l to model paths in a KB (Guu et al., 2015) , as defined by r 1 , .", ".", ".", ", r l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous.", "For example, a sequence of two facts The Matrix, country of film, Australia and Australia, currency of country, Australian Dollar form a path of composition country of film / currency of country, because the head of the second fact (i.e.", "Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r 1 / .", ".", ".", "/r l , t) := exp(u h M r 1 · · · M r l v t ) to measure the plausibility of a path.", "It is explored in Guu et al.", "(2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn parameters u h , v t , M r of the score function, we follow Tian et al.", "(2016) using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) h, r 1 / .", ".", ".", ", t taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t * .", "Then, we maximize L 1 := path ln s(h, r 1 / .", ".", ".", ", t) k + s(h, r 1 / .", ".", ".", ", t) + noise ln k k + s(h, r 1 / .", ".", ".", ", t * ) as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as probability, L 1 represents the log-likelihood of \" h, r 1 / .", ".", ".", ", t being actual path and h, r 1 / .", ".", ".", ", t * being noise\".", "Maximizing L 1 increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder Autoencoders learn efficient codings of highdimensional data while trying to reconstruct the original data from the coding.", "By joint training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e.", "relation matrices).", "Formally, we define a vectorization m r for each relation matrix M r , and use it as input to the autoencoder.", "m r is defined as a reshape of M r flattened into a d 2 -dimension vector, and normalized such that m r = √ d. 
We define c r := ReLU(Am r ) (2) as the coding.", "Here A is a c × d 2 matrix with c d 2 , and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010) .", "We reconstruct the input from c r by multiplying a d 2 × c matrix B.", "We want Bc r to be more similar to m r than other relations.", "For this purpose, we define a similarity g(r 1 , r 2 ) := exp( 1 √ dc m r 1 Bc r 2 ), (3) which measures the length of Bc r 2 projected to the direction of m r 1 .", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r * for each relation r and maximize L 2 := r∈R ln g(r, r) k + g(r, r) + r * ∼R ln k k + g(r, r * ) as our reconstruction objective.", "Maximizing L 2 increases m r 's similarity with Bc r , and decreases it with Bc r * .", "During joint training, both L 1 and L 2 are simultaneously maximized, and the gradient ∇L 2 propagates to relation matrices as well.", "Since ∇L 2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L 1 and ∇L 2 , but if they update ∇L 1 too much, the autoencoder has no effect; conversely, if they update ∇L 2 too often, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse -in which the autoencoder imposes arbitrary patterns to relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L 1 + L 2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1 √ dc in definition of the similarity function (3) , perhaps being combined with other settings as we discuss below.", "We have tried different factors 1, 1 √ d , 1 √ c and 1 dc instead, with various combinations of d and c; but the autoencoder failed to learn meaningful codings in other settings.", "When the scaling factor is too small (e.g.", "1 dc ), all relations get almost the same coding; conversely if the factor is too large (e.g.", "1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L 1 and ∇L 2 .", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ ) := η 1 + ηλτ .", "(4) Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a a step counter τ r for each relation r, counting \"number of updates\" instead of data points 2 .", "That is, whenever M r gets a nonzero update from a gradient calculation, τ r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η 1 , λ 1 for updates coming from ∇L 1 , and η 2 , λ 2 for updates coming from ∇L 2 .", "Thus, let ∆ 1 be the partial gradient of ∇L 1 , and ∆ 2 the partial gradient of ∇L 2 , we update M r by α 1 (τ r )∆ 1 + α 2 (τ r )∆ 2 at each step, where α 1 (τ r ) := η 1 1 + η 1 λ 1 τ r , α 2 (τ r ) := η 2 1 + η 2 λ 2 τ r .", "The rule for setting η 1 , λ 1 and η 2 , λ 2 is 
that, η 2 should be much smaller than η 1 , because η 1 , η 2 control the magnitude of learning rates at the early stage of training, with the autoencoder still largely random and ∆ 2 not making much sense; on the other hand, one has to choose λ 1 and λ 2 such that ∆ 1 /λ 1 and ∆ 2 /λ 2 are at the same scale, because the learning rates approach 1/(λ 1 τ r ) and 1/(λ 2 τ r ) respectively, as the training proceeds.", "In this way, the autoencoder will not impose random patterns to relation matrices according to its initialization at the early stage, and a balance is kept between α 1 (τ r )∆ 1 and α 2 (τ r )∆ 2 later.", "But how to estimate ∆ 1 and ∆ 2 ?", "It seems that we can approximately calculate their scales from initialization.", "In this work, we use i.i.d.", "Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are u h ≈ 1, v t ≈ 1, M r ≈ √ d, and BAm r ≈ √ dc.", "Thus, by calculating ∇L 1 and ∇L 2 using (1) and (3) , we have approximately ∆ 1 ≈ u h v t ≈ 1, and ∆ 2 ≈ 1 √ dc Bc r ≈ 1 √ dc BAm r ≈ 1.", "It suggests that, because of the scaling factor 1 √ dc in (3), we have ∆ 1 and ∆ 2 at the same scale, so we can set λ 1 = λ 2 .", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show performance gains by these settings using the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to M r = √ d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize M r M r − 1 d tr(M r M r )I during training.", "This regularizer drives M r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016) .", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somehow counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017) .", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al.", "(2013) 's vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1to-1 relations but might be too simple to represent N -to-N relations accurately (Wang et al., 2017) .", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) are proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.", 
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
WordNet (Miller, 1995) , and FB15k is taken from Freebase (Bollacker et al., 2008) ; both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimension d = 256 and c = 16, the SGD hyper-parameters η 1 = 1/64, η 2 = 2 −14 and λ 1 = λ 2 = 2 −14 .", "The training batch size is 32 and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walk to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple h, r, ?", "in KBC test, we calculate a score s(h, r, e) from (1), for every entity e ∈ E such that h, r, e does not appear in any of the training, validation, or test sets (Bordes et al., 2013) .", "Then, the calculated scores together with s(h, r, t) for the gold triple is converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.", "KBC Results The results are shown in Table 2 .", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes more clear when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because generally, joint training contributes with its regularizing effects, and drastic improvements are less expected 3 .", "When compositional training is enabled, 3 The source code and trained models are publicly released at https://github.com/tianran/glimvec, where profession profession −1 film_crew_role −1 film_release_region −1 film_language −1 nationality currency_of_country currency_of_company currency_of_university currency_of_film_budget 2 4 6 8 10 12 14 16 currency_of_film_budget release_region_of_film corporation_of_film producer_of_film writer_of_film the system usually achieves better MR, though not always improves in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018) .", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For re-experiments, we use Lin et al.", "(2015b) 's implementation 4 of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations; and Nickel et al.", "(2016b) 's implementation 5 of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We 
experimented with the default settings, and found that our models outperform most of them.", "Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017) Table 2 : KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets.", "The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results.", "Bold numbers are the best in each sector, and ( * ) indicates the best of all.", "(Trouillon et al., 2016) and ConvE were previously the best results.", "Our models mostly outperform them.", "Other results include Kadlec et al.", "(2017) 's simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2 , we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g.", "film and language), which probably constitute the skeleton of a KB.", "In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrain them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have 6 a cosine similarity 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP 7 (McInnes and Healy, 2018) to embed M r into a 2D plane 8 .", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3 .", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with 
frequent ones, and multiple traces of low dimension structures.", "It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures against Figure 3b , which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r 1 /r 2 , r 3 ) pairs such that r 1 /r 2 matches r 3 .", "Formally, the list is constructed as below.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that h, r, t is a fact in the KB.", "Similarly, we define C(r 1 /r 2 ) t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful.", "as the set of (h, t) pairs such that h, r 1 /r 2 , t is a path.", "We regard (r 1 /r 2 , r 3 ) as a compositional constraint if their content sets are similar; that is, if |C(r 1 /r 2 ) ∩ C(r 3 )| ≥ 50 and the Jaccard similarity between C(r 1 /r 2 ) and C(r 3 ) is ≥ 0.4.", "Then, after filtering out degenerated cases such as r 1 = r 3 or r 2 = r −1 1 , we obtained a list of 154 compositional constraints, e.g.", "(currency of country/country of film, currency of film budget).", "For each compositional constraint (r 1 /r 2 , r 3 ) in the list, we take the matrices M 1 , M 2 and M 3 corresponding to r 1 , r 2 and r 3 respectively, and rank M 3 according to its cosine similarity with M 1 M 2 , among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as a randomized baseline where M 2 is selected randomly from the relation matrices in JOINT+COMP instead (RANDOMM2).", "The results are shown in Table 3 .", "We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests a hypothesis that joint training might be just clustering M 3 and M 1 here, to the extent that M 3 and M 1 are so close that even a random M 2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.", "Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings as we discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices to orthogonal.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations, by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.", "In this work, path 
"The results on FB15k-237 are shown in Table 5.", "We can see that, as λ gets larger, MR improves considerably but MRR drops slightly.", "It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves the base models even more as the paths get longer, especially in MR.", "It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discover compositional constraints and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below.", "For an incomplete triple ⟨h, r, ?⟩ in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in a setting where all triples with OOV entities are removed from the test.", "The results are shown in Table 6." ] }
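A small sketch of the OOV fallback just described; the toy triples and vocabulary are assumptions for illustration, not the paper's data.

```python
# Sketch of the OOV handling: replace an unseen head entity with the most
# frequent training head for the same relation. Toy data, for illustration.
from collections import Counter, defaultdict

train = [("paris", "capital_of", "france"),
         ("paris", "capital_of", "france"),
         ("berlin", "capital_of", "germany")]
head_counts = defaultdict(Counter)     # relation -> Counter over head entities
for h, r, t in train:
    head_counts[r][h] += 1

def resolve_head(h, r, vocab):
    if h in vocab:
        return h
    return head_counts[r].most_common(1)[0][0]   # most frequent head for r

vocab = {h for h, _, _ in train} | {t for _, _, t in train}
print(resolve_head("unseen_city", "capital_of", vocab))  # -> "paris"
```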
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-16
Joint Training Helps Find Compositional Relations
If there is a composition, learned relation matrices indeed tend to comply with the composition; joint training with an autoencoder helps
If there is a composition, learned relation matrices indeed tend to comply with the composition; joint training with an autoencoder helps
[]
GEM-SciDuet-train-126#paper-1344#slide-17
1344
Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices: for one reason, composition of two relations M_1, M_2 may match a third M_3 (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M_1 · M_2 ≈ M_3). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ], "paper_content_text": [ "Introduction Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of head entity, relation, tail entity triples (e.g.", "The Matrix, country of film, Australia ), which could support a wide range of reasoning and question answering applications.", "The Knowledge Base Completion (KBC) task aims Figure 1 : In joint training, relation parameters (e.g.", "M 1 ) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.", "to predict the missing part of an incomplete triple, such as Finding Nemo, country of film, ?", ", by reasoning from known facts stored in the KB.", "As a most common approach (Wang et al., 2017) , modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons.", "First, when dimension is low, entities modeled as vectors are forced to share parameters, so \"similar\" entities which participate in many relations in common get close to each other (e.g.", "Australia close to US).", "This could imply that an entity (e.g.", "US) \"type matches\" a relation such as country of film.", "Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from x, award winner, y to x, award nominated, y .", "Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g.", "the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.", "However, modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations.", "Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M 1 , M 2 ) matching a 
third (M 3 ) also justifies dimension reduction, because it implies a compositional constraint M 1 · M 2 ≈ M 3 that can be satisfied only by a lower dimension sub-manifold in the parameter space 1 .", "Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices , or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017) .", "However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g.", "the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.", "In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1 ).", "During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well.", "We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2).", "Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.", "1 It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.", "Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process.", "We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4).", "We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank.", "We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).", "Base Model A knowledge base (KB) is a set T of triples of the form h, r, t , where h, t ∈ E are entities and r ∈ R is a relation (e.g.", "The Matrix, country of film, Australia ).", "A relation r has its inverse r −1 ∈ R so that for every h, r, t ∈ T , we regard t, r −1 , h as also in the KB.", "Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete h, r, ?", "triple.", "Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.", "The model we implement in this work represents entities h, t as d-dimension vectors u h , v t respectively, and relation r as a d×d matrix M r .", "If u h , v t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into h, r, ?", "is calculated by u h M r (with each nonzero entry corresponds to an answer).", "Thus, we 
have u h M r v t > 0 if and only if h, r, t ∈ T .", "This motivates us to use u h M r v t as a natural parameter to model plausibility of h, r, t , even in a low dimension space with d |E|.", "Thus, we define the score function as s(h, r, t) := exp(u h M r v t ) (1) for the basic model.", "This is similar to the bilinear model of Nickel et al.", "(2011) , except that we distinguish u h (the vector for head entities) from v t (the vector for tail entities).", "It has also been proposed in Tian et al.", "(2016) , but for modeling dependency trees rather than KBs.", "More generally, we consider composition of relations r 1 / .", ".", ".", "/r l to model paths in a KB (Guu et al., 2015) , as defined by r 1 , .", ".", ".", ", r l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous.", "For example, a sequence of two facts The Matrix, country of film, Australia and Australia, currency of country, Australian Dollar form a path of composition country of film / currency of country, because the head of the second fact (i.e.", "Australia) coincides with the tail of the first.", "Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define s(h, r 1 / .", ".", ".", "/r l , t) := exp(u h M r 1 · · · M r l v t ) to measure the plausibility of a path.", "It is explored in Guu et al.", "(2015) to learn a score function not only for single facts but also for paths.", "This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC.", "In this work, we conduct experiments both with and without compositional training.", "In order to learn parameters u h , v t , M r of the score function, we follow Tian et al.", "(2016) using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective.", "For each path (or triple) h, r 1 / .", ".", ".", ", t taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t * .", "Then, we maximize L 1 := path ln s(h, r 1 / .", ".", ".", ", t) k + s(h, r 1 / .", ".", ".", ", t) + noise ln k k + s(h, r 1 / .", ".", ".", ", t * ) as our KB-learning objective.", "Here, k is the number of noises generated for each path.", "When the score function is regarded as probability, L 1 represents the log-likelihood of \" h, r 1 / .", ".", ".", ", t being actual path and h, r 1 / .", ".", ".", ", t * being noise\".", "Maximizing L 1 increases the scores of actual paths and decreases the scores of noises.", "Joint Training with an Autoencoder Autoencoders learn efficient codings of highdimensional data while trying to reconstruct the original data from the coding.", "By joint training relation matrices with an autoencoder, we also expect it to help reducing the dimensionality of the original data (i.e.", "relation matrices).", "Formally, we define a vectorization m r for each relation matrix M r , and use it as input to the autoencoder.", "m r is defined as a reshape of M r flattened into a d 2 -dimension vector, and normalized such that m r = √ d. 
We define c r := ReLU(Am r ) (2) as the coding.", "Here A is a c × d 2 matrix with c d 2 , and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010) .", "We reconstruct the input from c r by multiplying a d 2 × c matrix B.", "We want Bc r to be more similar to m r than other relations.", "For this purpose, we define a similarity g(r 1 , r 2 ) := exp( 1 √ dc m r 1 Bc r 2 ), (3) which measures the length of Bc r 2 projected to the direction of m r 1 .", "In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r * for each relation r and maximize L 2 := r∈R ln g(r, r) k + g(r, r) + r * ∼R ln k k + g(r, r * ) as our reconstruction objective.", "Maximizing L 2 increases m r 's similarity with Bc r , and decreases it with Bc r * .", "During joint training, both L 1 and L 2 are simultaneously maximized, and the gradient ∇L 2 propagates to relation matrices as well.", "Since ∇L 2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices.", "In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.", "Optimization Tricks Joint training with an autoencoder is not simple.", "Relation matrices receive updates from both ∇L 1 and ∇L 2 , but if they update ∇L 1 too much, the autoencoder has no effect; conversely, if they update ∇L 2 too often, all relation matrices crush into one cluster.", "Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse -in which the autoencoder imposes arbitrary patterns to relation matrices according to random initialization.", "Therefore, it is not surprising that a naive optimization of L 1 + L 2 does not work.", "After extensive pre-experiments, we have found some crucial settings for successful training.", "The most important \"magic\" is the scaling factor 1 √ dc in definition of the similarity function (3) , perhaps being combined with other settings as we discuss below.", "We have tried different factors 1, 1 √ d , 1 √ c and 1 dc instead, with various combinations of d and c; but the autoencoder failed to learn meaningful codings in other settings.", "When the scaling factor is too small (e.g.", "1 dc ), all relations get almost the same coding; conversely if the factor is too large (e.g.", "1), all codings get very close to 0.", "The next important rule is to keep a balance between the updates coming from ∇L 1 and ∇L 2 .", "We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as α(τ ) := η 1 + ηλτ .", "(4) Here, η, λ are hyper-parameters and τ is a counter of processed data points.", "In this work, in order to control the updates in detail to keep a balance, we modify (4) to use a a step counter τ r for each relation r, counting \"number of updates\" instead of data points 2 .", "That is, whenever M r gets a nonzero update from a gradient calculation, τ r increases by 1.", "Furthermore, we use different hyper-parameters for different \"types of updates\", namely η 1 , λ 1 for updates coming from ∇L 1 , and η 2 , λ 2 for updates coming from ∇L 2 .", "Thus, let ∆ 1 be the partial gradient of ∇L 1 , and ∆ 2 the partial gradient of ∇L 2 , we update M r by α 1 (τ r )∆ 1 + α 2 (τ r )∆ 2 at each step, where α 1 (τ r ) := η 1 1 + η 1 λ 1 τ r , α 2 (τ r ) := η 2 1 + η 2 λ 2 τ r .", "The rule for setting η 1 , λ 1 and η 2 , λ 2 is 
that, η 2 should be much smaller than η 1 , because η 1 , η 2 control the magnitude of learning rates at the early stage of training, with the autoencoder still largely random and ∆ 2 not making much sense; on the other hand, one has to choose λ 1 and λ 2 such that ∆ 1 /λ 1 and ∆ 2 /λ 2 are at the same scale, because the learning rates approach 1/(λ 1 τ r ) and 1/(λ 2 τ r ) respectively, as the training proceeds.", "In this way, the autoencoder will not impose random patterns to relation matrices according to its initialization at the early stage, and a balance is kept between α 1 (τ r )∆ 1 and α 2 (τ r )∆ 2 later.", "But how to estimate ∆ 1 and ∆ 2 ?", "It seems that we can approximately calculate their scales from initialization.", "In this work, we use i.i.d.", "Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are u h ≈ 1, v t ≈ 1, M r ≈ √ d, and BAm r ≈ √ dc.", "Thus, by calculating ∇L 1 and ∇L 2 using (1) and (3) , we have approximately ∆ 1 ≈ u h v t ≈ 1, and ∆ 2 ≈ 1 √ dc Bc r ≈ 1 √ dc BAm r ≈ 1.", "It suggests that, because of the scaling factor 1 √ dc in (3), we have ∆ 1 and ∆ 2 at the same scale, so we can set λ 1 = λ 2 .", "This might not be a mere coincidence.", "Training the Base Model Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below.", "In Sec.6.3, we will show performance gains by these settings using the FB15k-237 validation set.", "Normalization It is better to normalize relation matrices to M r = √ d during training.", "This might reduce fluctuations in entity vector updates.", "Regularizer It is better to minimize M r M r − 1 d tr(M r M r )I during training.", "This regularizer drives M r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates.", "As a result, all relation matrices trained in this work are very close to orthogonal.", "Initialization Instead of pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random.", "The identity matrix I helps passing information from head to tail (Tian et al., 2016) .", "Negative Sampling Instead of a unigram distribution, it is better to use a uniform distribution for generating noises.", "This is somehow counterintuitive compared to training word embeddings.", "Related Works KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017) .", "Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al.", "(2013) 's vector arithmetic method of solving word analogy tasks.", "Although competitive in KBC, it is speculated that this method is well-suited for 1to-1 relations but might be too simple to represent N -to-N relations accurately (Wang et al., 2017) .", "Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) are proposed to map entities into a relation-specific vector space before translation.", "The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices.", 
"Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations.", "On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011) , in which relations are naturally represented as analogue to the adjacency matrices (Sec.2).", "Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018) which improve this approach in terms of parameterefficiency, by introducing low dimension factorizations of the matrices.", "We inherit the basic model of RESCAL but draw additional training techniques from Tian et al.", "(2016) , and show that the base model already can achieve near state-of-the-art performance (Sec.6.1,6.3).", "This sends a message similar to Kadlec et al.", "(2017) , saying that training tricks might be as important as model designs.", "Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018) , whereas the constraints themselves are not learned from data; in contrast, our approach by jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling.", "Moreover, we additionally focus on leveraging composition in KBC.", "Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a) , our discussion about the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research.", "In experiments, we will show (Sec.6.2,6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.", "Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011) , words and images (Silberer and Lapata, 2014) , or semantic roles (Titov and Khoddam, 2015) .", "It is also used for pretraining other deep neural networks (Erhan et al., 2010) .", "However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010) , is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al.", "(2017) .", "To our knowledge, joint training with an autoencoder is not widely used previously for reducing dimensionality.", "Jointly training an autoencoder is not simple because it takes non-stationary inputs.", "In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011) , in that they both set different learning rates for different parameters.", "While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update.", "We believe our techniques and findings in joint training with an autoencoder could be helpful to reducing dimensionality and improving interpretability in other neural network architectures as well.", "Experiments We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013) , WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015) .", "The statistical information of these datasets are shown in Table 1.", "WN18 collects word relations from 
WordNet (Miller, 1995) , and FB15k is taken from Freebase (Bollacker et al., 2008) ; both have filtered out low frequency entities.", "However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks because the inverses of some test triples appear in the training set.", "FB15k-237 and WN18RR fix this problem by deleting such triples from training and test data.", "In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.", "For all datasets, we set the dimension d = 256 and c = 16, the SGD hyper-parameters η 1 = 1/64, η 2 = 2 −14 and λ 1 = λ 2 = 2 −14 .", "The training batch size is 32 and the triples in each batch share the same head entity.", "We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP).", "When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walk to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.", "For any incomplete triple h, r, ?", "in KBC test, we calculate a score s(h, r, e) from (1), for every entity e ∈ E such that h, r, e does not appear in any of the training, validation, or test sets (Bordes et al., 2013) .", "Then, the calculated scores together with s(h, r, t) for the gold triple is converted to ranks, and the rank of the gold entity t is used for evaluation.", "Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10).", "Lower MR, higher MRR, and higher H10 indicate better performance.", "We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.", "KBC Results The results are shown in Table 2 .", "We found that joint training with an autoencoder mostly improves performance, and the improvement becomes more clear when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP).", "This is convincing because generally, joint training contributes with its regularizing effects, and drastic improvements are less expected 3 .", "When compositional training is enabled, 3 The source code and trained models are publicly released at https://github.com/tianran/glimvec, where profession profession −1 film_crew_role −1 film_release_region −1 film_language −1 nationality currency_of_country currency_of_company currency_of_university currency_of_film_budget 2 4 6 8 10 12 14 16 currency_of_film_budget release_region_of_film corporation_of_film producer_of_film writer_of_film the system usually achieves better MR, though not always improves in other measures.", "The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018) .", "Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature.", "We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results.", "For re-experiments, we use Lin et al.", "(2015b) 's implementation 4 of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations; and Nickel et al.", "(2016b) 's implementation 5 of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant.", "We 
experimented with the default settings, and found that our models outperform most of them.", "Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017) Table 2 : KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets.", "The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results.", "Bold numbers are the best in each sector, and ( * ) indicates the best of all.", "(Trouillon et al., 2016) and ConvE were previously the best results.", "Our models mostly outperform them.", "Other results include Kadlec et al.", "(2017) 's simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure.", "Our models have comparable results.", "Intuition and Insight What does the autoencoder look like?", "How does joint training affect relation matrices?", "We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints.", "Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns sparse coding, with most relations having large code values at only two or three dimensions.", "This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations.", "Figure 2 shows some examples.", "In the first group of Figure 2 , we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization.", "These are high frequency relations joining two large categories (e.g.", "film and language), which probably constitute the skeleton of a KB.", "In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film.", "As for the relation currency of film budget, it has large code values at both dimensions.", "This kind of relation clustering also seems independent of initialization.", "Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them.", "Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrain them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably.", "For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have 6 a cosine similarity 0.338.", "Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP 7 (McInnes and Healy, 2018) to embed M r into a 2D plane 8 .", "We use relation matrices trained on FB15k-237, and compare models trained by the same number of epochs.", "The results are shown in Figure 3 .", "We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates.", "On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with 
frequent ones, and multiple traces of low dimension structures.", "It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold.", "In addition, Figure 3d shows different structures against Figure 3b , which we conjecture could be related to compositional constraints discovered by compositional training.", "Compositional constraints In order to directly evaluate a model's ability to find compositional constraints, we extracted from FB15k-237 a list of (r 1 /r 2 , r 3 ) pairs such that r 1 /r 2 matches r 3 .", "Formally, the list is constructed as below.", "For any relation r, we define a content set C(r) as the set of (h, t) pairs such that h, r, t is a fact in the KB.", "Similarly, we define C(r 1 /r 2 ) t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful.", "as the set of (h, t) pairs such that h, r 1 /r 2 , t is a path.", "We regard (r 1 /r 2 , r 3 ) as a compositional constraint if their content sets are similar; that is, if |C(r 1 /r 2 ) ∩ C(r 3 )| ≥ 50 and the Jaccard similarity between C(r 1 /r 2 ) and C(r 3 ) is ≥ 0.4.", "Then, after filtering out degenerated cases such as r 1 = r 3 or r 2 = r −1 1 , we obtained a list of 154 compositional constraints, e.g.", "(currency of country/country of film, currency of film budget).", "For each compositional constraint (r 1 /r 2 , r 3 ) in the list, we take the matrices M 1 , M 2 and M 3 corresponding to r 1 , r 2 and r 3 respectively, and rank M 3 according to its cosine similarity with M 1 M 2 , among all relation matrices.", "Then, we calculate MR and MRR for evaluation.", "We compare the JOINT+COMP model to BASE+COMP, as well as a randomized baseline where M 2 is selected randomly from the relation matrices in JOINT+COMP instead (RANDOMM2).", "The results are shown in Table 3 .", "We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation.", "We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints.", "Furthermore, the random baseline RANDOMM2 tests a hypothesis that joint training might be just clustering M 3 and M 1 here, to the extent that M 3 and M 1 are so close that even a random M 2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility.", "Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed.", "Losses and Gains In the KBC task, where are the losses and what are the gains of different settings?", "With additional evaluations, we show (i) some crucial settings for the base model, and (ii) joint training with an autoencoder benefits more from compositional training.", "Crucial settings for the base model It is noteworthy that our base model already achieves strong results.", "This is due to several detailed but crucial settings as we discussed in Sec.4.1; Table 4 shows their gains on the FB15k-237 validation data.", "The most dramatic improvement comes from the regularizer that drives matrices to orthogonal.", "Gains with compositional training One can force a model to focus more on (longer) compositions of relations, by sampling longer paths in compositional training.", "Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer.", "In this work, path 
lengths are sampled from a Poisson distribution, we thus vary the mean λ of the Poisson to control the strength of compositional training.", "The results on FB15k-237 are shown in Table 5 .", "We can see that, as λ gets larger, MR improves much but MRR slightly drops.", "It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one.", "Yet, joint training improves base models even more as the paths get longer, especially in MR.", "It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training.", "Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder.", "We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank.", "Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discovering compositional constraints and benefit from compositional training.", "We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task.", "Occasionally, a KBC test set may contain entities that never appear in the training data.", "Such out-ofvocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learn an OOV entity vector (Dettmers et al., 2018 ), our approach is described below.", "For an incomplete triple h, r, ?", "in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data.", "If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity.", "Usually, OOV entities are rare and negligible in evaluation; except for the WN18RR test data which contains about 6.7% triples with OOV entities.", "Here, we also report adjusted scores on WN18RR in the setting that all triples with OOV entities are removed from the test.", "The results are shown in Table 6" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "5", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Base Model", "Joint Training with an Autoencoder", "Optimization Tricks", "Training the Base Model", "Related Works", "Experiments", "KBC Results", "Intuition and Insight", "Losses and Gains", "Conclusion" ] }
GEM-SciDuet-train-126#paper-1344#slide-17
Conclusion and Discussion
Task: Knowledge Base Completion. Approach: entities as low dimension vectors, relations as matrices. Techniques: joint training of relation matrices with an autoencoder to reduce dimensionality; modified SGD with different learning rates for different parts; separate learning rates for updating relation matrices; normalization, regularization, and initialization of relation matrices. Analysis: the autoencoder learns sparse and interpretable low dimensional codings of relations; dimension reduction helps find compositional relations. Discussion: modern NNs have a lot of parameters; joint training with an autoencoder may reduce dimensionality while the NN is functioning
Task: Knowledge Base Completion. Approach: entities as low dimension vectors, relations as matrices. Techniques: joint training of relation matrices with an autoencoder to reduce dimensionality; modified SGD with different learning rates for different parts; separate learning rates for updating relation matrices; normalization, regularization, and initialization of relation matrices. Analysis: the autoencoder learns sparse and interpretable low dimensional codings of relations; dimension reduction helps find compositional relations. Discussion: modern NNs have a lot of parameters; joint training with an autoencoder may reduce dimensionality while the NN is functioning
[]
GEM-SciDuet-train-127#paper-1346#slide-0
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1).", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required.", "Figure 1: Left: Online reinforcement learning setup for semantic parsing, where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.", "First, we need to construct an easy-to-use user interface that allows to collect feedback based on the parse rather than the answer.", "To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion; human bandit feedback from users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse.", "Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016) or question-answer pairs (Neelakantan et al., 2017).", "Improving semantic parsers using weak feedback has previously been studied (Goldwasser and Roth, 2013; Artzi and Zettlemoyer, 2013; inter alia).",
"More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al. (2017); Mou et al. (2017); Peng et al. (2017); inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "It thus differs from a bandit setup where we assume that a reward is available for only one structure.", "Our work most closely resembles the work of Iyer et al. (2017) who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervision approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al. (2016) proposed a user interface for the Freebase database that enables a fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely and from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a), or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016).", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al. (2017b).", "Following their insight, we also assume the logs were created deterministically, i.e. the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing Our semantic parsing model is a state-of-the-art sequence-to-sequence neural network using an encoder-decoder setup (Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015).", "We use the settings of Sennrich et al. (2017), where an input sequence x = x_1, x_2, …, x_{|x|} (a natural language question) is encoded by a Recurrent Neural Network (RNN); each input token has an associated hidden vector $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$, where the former is created by a forward pass over the input, and the latter by a backward pass.", "$\overrightarrow{h}_i$ is obtained by recursively computing $f(x_i, \overrightarrow{h}_{i-1})$, where f is a Gated Recurrent Unit (GRU) (Chung et al., 2014), and $\overleftarrow{h}_i$ is computed analogously.", "The input sequence is reduced to a single vector $c = g(\{h_1, …, h_{|x|}\})$, which serves as the initialization of the decoder RNN.", "g calculates the average over all vectors h_i.", "At each time step t the decoder state is set by $s_t = q(s_{t-1}, y_{t-1}, c_t)$.", "q is a conditional GRU with an attention mechanism, and c_t is the context vector computed by the attention mechanism.",
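A minimal PyTorch sketch of the encoder just described (bidirectional GRU, forward and backward states concatenated, mean-pooled into the decoder initialization). This is an illustrative reconstruction under made-up sizes, not the authors' implementation, which follows Sennrich et al. (2017).

```python
# Illustrative sketch of the bidirectional GRU encoder described above;
# sizes and inputs are made up, and this is not the authors' code.
import torch
import torch.nn as nn

vocab_size, emb_dim, hid_dim = 1000, 64, 128
embed = nn.Embedding(vocab_size, emb_dim)
encoder = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)

x = torch.randint(0, vocab_size, (1, 7))   # one question of 7 tokens
states, _ = encoder(embed(x))              # (1, 7, 2*hid_dim): h_i = [fwd; bwd]
c = states.mean(dim=1)                     # g(.): average over all h_i
# c would initialize the attentional decoder state s_0.
print(c.shape)                             # torch.Size([1, 256])
```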
mechanism.", "Given an output vocabulary V y and the decoder state s t = {s 1 , .", ".", ".", ", s |Vy| }, a softmax output layer defines a probability distribution over V y and the probability for a token y j is: sively computing f (x i , − → h i−1 ) where f is a Gated Recurrent Unit (GRU) π w (y j = t o |y <j , x) = exp(s to ) |Vy| v=1 exp(s tv ) .", "(1) The model π w can be seen as parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence y = y 1 , y 2 , .", ".", ".", "y |y| is defined by π w (y|x) = |y| j=1 π w (y j |y <j , x).", "(2) In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data D sup = {(x t ,ȳ t )} n t=1 of question-query pairs, whereȳ t is the true target query for x t , the neural network can be trained using SGD and a cross-entropy (CE) objective: L CE = − 1 n n t=1 |ȳ| j=1 log π w (ȳ j |ȳ <j , x).", "(3) Counterfactual Learning from Deterministic Bandit Logs Counterfactual Learning Objectives.", "We assume a policy π w that, given an input x ∈ X , defines a conditional probability distribution over possible outputs y ∈ Y(x).", "Furthermore, we assume that the policy is parameterized by w and its gradient can be derived.", "In this work, π w is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e.", "that the model produces the output token by token.", "The counterfactual learning problem can be described as follows: We are given a data log of ∇ wRDPM = 1 n n t=1 δ t π w (y t |x t )∇ w log π w (y t |x t ).", "∇ wRDPM+R = 1 n n t=1 [δ tπw (y t |x t )(∇ w log π w (y t |x t ) − 1 n n u=1π w (y u |x u )∇ log π w (y u |x u ))].", "∇ wRDPM+OSL = 1 m m t=1 δ tπw,w (y t |x t )∇ w log π w (y t |x t ).", "∇ wRDPM+T = 1 n n t=1 |y| j=1 δ j π w (y j |x t ) |y| j=1 ∇ w log π w (y j |x t ).", "∇ wRDPM+T+OSL = 1 m m t=1 |y| j=1 δ jπw,w (y t |x t ) |y| j=1 ∇ w log π w (y j |x t ).", "triples D log = {(x t , y t , δ t )} n t=1 where outputs y t for inputs x t were generated by a logging system under policy π 0 , and loss values δ t ∈ [−1, 0] 2 were observed for the generated data points.", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy π w given the data log D log .", "In case of deterministic logging, outputs are logged with propensity π 0 (y t |x t ) = 1, t = 1, .", ".", ".", ", n. 
"This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b), without the possibility to correct the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983): $\hat{R}_{DPM}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t|x_t)$. (4)", "This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a).", "This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016).", "The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al. (2017b): $\hat{R}_{DPM+R}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \bar{\pi}_w(y_t|x_t) = \frac{\frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_w(y_t|x_t)}$. (5)", "Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms where gradients using the score function gradient estimator (Fu, 2006) are shown in Table 1.", "Reweighting in Stochastic Learning.", "As shown in Swaminathan and Joachims (2015b) and Lawrence et al. (2017a), reweighting over the entire data log D_log is crucial since it avoids that high loss outputs in the log take away probability mass from low loss outputs.", "This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order O(1/n) that decreases as n increases (Kong, 1992).", "The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup since minibatch sizes large enough to retain the desirable reweighting properties are infeasible for large neural networks.", "We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting.", "The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990).", "In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions.", "In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters w' from some previous iteration.", "This allows us to perform gradient descent updates and reweighting asynchronously.", "Updates are performed using minibatches; however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate.", "The new objective, called the one-step-late reweighted DPM objective (DPM+OSL), optimizes $\bar{\pi}_{w,w'}$ with respect to w for a minibatch of size m, with reweighting over the entire log of size n under parameters w': $\hat{R}_{DPM+OSL}(\pi_w) = \frac{1}{m} \sum_{t=1}^{m} \delta_t\, \bar{\pi}_{w,w'}(y_t|x_t) = \frac{\frac{1}{m} \sum_{t=1}^{m} \delta_t\, \pi_w(y_t|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t|x_t)}$. (6)", "If the renormalization is updated periodically, e.g. after every validation step, renormalizations under w or w' are not much different and will not hamper convergence.", "Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance.", "The gradient for learning with OSL updates is given in Table 1.", "Token-Level Rewards.", "For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to be helpful: For humans, it is hard to assign a graded reward to a query at a sequence level because either the query is correct or it is not.",
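The asynchronous OSL reweighting can be sketched as follows; `ToyParser` and its `seq_prob` method are hypothetical stand-ins for the seq2seq policy π_w, and the log is synthetic, so this is only a sketch of Eq. (6), not the authors' code.

```python
# Sketch of DPM+OSL (Eq. 6): minibatch updates with a normalizer computed
# over the FULL log under one-step-late parameters w'. Toy model and data.
import torch

class ToyParser(torch.nn.Module):
    """Stand-in policy: returns pi_w(y|x) for a logged (x, y) pair by id."""
    def __init__(self, n_pairs):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(n_pairs))
    def seq_prob(self, idx):
        return torch.sigmoid(self.logits[idx])

log = [(i, -1.0 if i % 2 else 0.0) for i in range(100)]  # (pair id, delta_t)
model, model_old = ToyParser(100), ToyParser(100)        # w and one-step-late w'

with torch.no_grad():   # Z refreshed only periodically, e.g. per validation
    z_osl = torch.stack([model_old.seq_prob(i) for i, _ in log]).mean()

minibatch = log[:16]
risk = torch.stack([d * model.seq_prob(i) / z_osl for i, d in minibatch]).mean()
risk.backward()         # gradient flows only through the minibatch terms
```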
be helpful: For humans, it is hard to assign a graded reward to a query at the sequence level, because either the query is correct or it is not.", "In particular, with a sequence-level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct.", "Assigning rewards at the token level eases the feedback task and allows the semantic parser to learn from partially correct queries.", "Thus, assuming the underlying policy can decompose over tokens, a token-level reward objective (DPM+T) can be defined: $R_{\text{DPM+T}}(\pi_w) = \frac{1}{n}\sum_{t=1}^{n}\Big[\prod_{j=1}^{|y|} \delta_j\, \pi_w(y_j|x_t)\Big]$. (7)", "Analogously, we can define an objective that combines the token-level rewards and the minibatched reweighting (DPM+T+OSL): $R_{\text{DPM+T+OSL}}(\pi_w) = \frac{1}{m}\sum_{t=1}^{m} \frac{\prod_{j=1}^{|y|} \delta_j\, \pi_w(y_j|x_t)}{\frac{1}{n}\sum_{t=1}^{n} \pi_{w'}(y_t|x_t)}$. (8)", "Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1.", "Semantic Parsing in the OpenStreetMap Domain OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interest in the world.", "A point of interest consists of one or more associated GPS points.", "Further relevant information may be added at the discretion of the volunteer in the form of tags.", "Each tag consists of a key and an associated value, for example \"tourism : hotel\".", "The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis for creating a natural language interface to the OSM database.", "It pairs English questions with machine readable parses, i.e.", "queries that can be executed against OSM.", "Human Feedback Collection.", "The task of creating a natural language interface for OSM demonstrates typical difficulties that make it expensive to collect supervised data.", "The machine readable language of the queries is based on the OVERPASS query language, which was specifically designed for the OSM database.", "It is thus not easy to find experts who could provide correct queries.", "It is equally difficult to ask workers at crowdsourcing platforms for the correct answer.", "For many questions, the answer set is too large to expect a worker to count or list all answers in a reasonable amount of time and without errors.", "For example, for the question \"How many hotels are there in Paris?\"", "there are 951 hotels annotated in the OSM database.", "Instead, we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human.", "The question and the created block of statements are embedded in a user interface with a form that can be filled out by users.", "Each statement is accompanied by a set of radio buttons where a user can select either \"Yes\" or \"No\".", "For a screenshot of the interface and an example, see Figure 2.", "In total there are 8 different types of statements.", "The presence of certain tokens in a query triggers different statement types.", "For example, the token \"area\" triggers the statement type \"Town\".", "The statement is then populated with the corresponding information from the query.", "In the case of \"area\", the following OSM value is used, e.g.", "\"Paris\".", "With this, the meaning of every query can be captured by a set of human-understandable statements.", "For a full overview of all statement types and their triggers, see section B of the supplementary material.", "OSM tags and keys are generally understandable.", "For example, the correct OSM tag for \"hotels\" is \"tourism : hotel\" and when searching for websites, the 
correct question type key would be \"website\".", "Nevertheless, for each OSM tag or key, we automatically search for the corresponding page on the OpenStreetMap Wiki (https://wiki.openstreetmap.org/) and extract the description for this tag or key.", "The description is made available to the user in the form of a tooltip that appears when hovering over the tag or key with the mouse.", "If a user is unsure whether an OSM tag or key is correct, they can read this description to help in their decision making.", "Once the form is submitted, a script maps each statement back to the corresponding tokens in the original query.", "These tokens then receive negative or positive feedback based on the feedback the user provided for that statement.", "Corpus Extension.", "Similar to the extension of the NLMAPS corpus by Lawrence and Riezler (2016), who included shortened questions of the kind more typically used by humans in search tasks, we present an automatic extension that allows a larger coverage of common OSM tags (the extended dataset, called NLMAPS V2, will be released upon acceptance of the paper).", "The basis for the extension is a hand-written, freely available online list that links natural language expressions such as \"cash machine\" to appropriate OSM tags, in this case \"amenity : atm\".", "Using the list, we generate for each unique expression-tag pair a set of question-query pairs.", "These latter pairs contain several placeholders which will be filled automatically in a second step.", "To fill the area placeholder $LOC, we sample from a list of 30 cities from France, Germany and the UK.", "$POI is the placeholder for a point of interest.", "We sample it from the list of objects which are located in the previously sampled city and which have a name key.", "The corresponding value belonging to the name key will be used to fill this spot.", "The placeholder $QTYPE is filled by uniformly sampling from the four primary question types available in the NLMAPS query language.", "On the natural language side, they correspond to \"How many\", \"Where\", \"Is there\" and $KEY.", "$KEY is a further parameter belonging to the primary question operator FINDKEY.", "It can be filled by any OSM key, such as name, website or height.", "To ensure that there will be an answer for the generated query, we first ran a query with the current tag (\"amenity : atm\") to find all objects fulfilling this requirement in the area of the already sampled city.", "From the list of returned objects and the keys that appear in association with them, we uniformly sampled a key.", "For $DIST we chose between the pre-defined options for walking distance and within-city distance.", "The expressions map to corresponding values which define the size of a radius in which objects of interest (with tag \"amenity : atm\") will be located.", "If the walking distance was selected, we added \"in walking distance\" to the question.", "Otherwise, no extra text was added to the question, assuming the within-city distance to be the default.", "This sampling process was repeated twice.", "Table 2 presents the corpus statistics, comparing NLMAPS (Lawrence and Riezler, 2016) to our extension with the most common OSM tags.", "The automatic extension, obviating the need for expensive manual work, increases the number of question-query pairs by an order of magnitude.", "Consequently, the numbers of tokens and types increase similarly.", "However, the average sentence length drops.", "This comes as 
no surprise due to the nature of the rather simple hand-written list, which never contains more than one tag per element, resulting in simpler question structures.", "However, the main idea of utilizing this list is to extend the coverage to previously unknown OSM tags.", "With 6,582 distinct tags compared to the previous 477, this was clearly successful.", "Together with the still complex sentences from the original corpus, a semantic parser is now able to learn both complex questions and a large variety of tags.", "An experiment that empirically validates the usefulness of the automatically created data can be found in the supplementary material, section A.", "Experiments General Settings.", "In our experiments we use the sequence-to-sequence neural network package NEMATUS (Sennrich et al., 2017).", "Following the method used by Haas and Riezler (2016), we split the queries into individual tokens by taking a pre-order traversal of the original tree-like structure.", "For example, \"query(west(area(keyval('name','Paris')), nwr(keyval('railway','station'))),qtype(count))\" becomes \"query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0\".", "The SGD optimizer used is ADADELTA (Zeiler, 2012).", "The model employs 1,024 hidden units and word embeddings of size 1,000.", "The maximum sentence length is 200, and gradients are clipped if they exceed a value of 1.0.", "The stopping point is determined by validating on the development set and selecting the point at which the highest evaluation score is obtained.", "F1 validation is run after every 100 updates, and each update is made on the basis of a minibatch of size 80.", "The evaluation of all models is based on the answers obtained by executing the most likely query found by a beam search with a beam of size 12.", "We report the F1 score, which is the harmonic mean of precision and recall.", "Recall is defined as the number of fully correct answers divided by the size of the set.", "Precision is the percentage of correct answers out of the set of answers with non-empty strings.", "Statistical significance between models is measured using an approximate randomization test (Noreen, 1989).", "Baseline Parser & Log Creation.", "Our experiment design assumes a baseline neural semantic parser that is trained in fully supervised fashion and is then improved by bandit feedback obtained for the baseline system's outputs on given questions.", "For this purpose, we select 2,000 question-query pairs randomly from the full extended NLMAPS V2 corpus.", "We will call this dataset $D_{\text{sup}}$.", "Using this dataset, a baseline semantic parser is trained in supervised fashion under a cross-entropy objective.", "It obtains an F1 score of 57.45% and serves as the logging policy $\pi_0$.", "Furthermore, we randomly split off 1,843 and 2,000 pairs for a development and test set, respectively.", "This leaves a set of 22,765 question-query pairs.", "The questions can be used as input, and bandit feedback can be collected for the most likely output of the semantic parser.", "We refer to this dataset as $D_{\text{log}}$.", "To collect human feedback, we take the first 1,000 questions from $D_{\text{log}}$ and use $\pi_0$ to parse these questions, obtaining one output query for each.", "5 question-query pairs are discarded because the suggested query is invalid.", "For the remaining question-query pairs, the queries are each transformed into a block of human-understandable statements and embedded into the user interface described in Section 
5.", "We recruited 9 users to provide feedback for these question-query pairs.", "The resulting log is referred to as $D_{\text{human}}$.", "Every question-query pair is purposely evaluated only once, to mimic a realistic real-world scenario where user logs are collected as users use the system.", "In this scenario, it is also not possible to explicitly obtain several evaluations for the same question-query pair.", "Some examples of the received feedback can be found in the supplementary material, section C. To verify that the feedback collection is efficient, we measured the time each user took from loading a form to submitting it.", "To provide feedback for one question-query pair, users took 16.4 seconds on average, with a standard deviation of 33.2 seconds.", "The vast majority (728 instances) are completed in less than 10 seconds.", "Learning from Human Bandit Feedback.", "An analysis of $D_{\text{human}}$ shows that for 531 queries all corresponding statements were marked as correct.", "We consider a simple baseline that treats completely correct logged data as a supervised data set with which training continues using the cross-entropy objective.", "We call this baseline bandit-to-supervised conversion (B2S).", "Furthermore, we present experimental results using the log $D_{\text{human}}$ for stochastic (minibatch) gradient descent optimization of the counterfactual objectives introduced in equations 4, 6, 7 and 8.", "For the token-level feedback, we map the evaluated statements back to the corresponding tokens in the original query and assign these tokens a feedback of 0 if the corresponding statement was marked as wrong and 1 otherwise.", "In the case of sequence-level feedback, the query receives a feedback of 1 if all statements are marked correct, 0 otherwise.", "For the OSL objectives, a separate experiment (see below) showed that updating the reweighting constant after every validation step promises the best trade-off between performance and speed.", "Results, averaged over 3 runs, are reported in Table 3.", "The B2S model can slightly improve upon the baseline, but not significantly.", "DPM improves further, significantly beating the baseline.", "Using the multiplicative control variate modified for SGD by OSL updates does not seem to help in this setup.", "By moving to token-level rewards, it is possible to learn from partially correct queries.", "These partially correct queries provide valuable information that is not present in the subset of correct answers employed by the previous models.", "Optimizing DPM+T leads to a slight improvement, and combined with the multiplicative control variate, DPM+T+OSL yields an improvement of about 1.0 in F1 score upon the baseline.", "It beats both the baseline and the B2S model by a significant margin.", "Learning from Large-Scale Simulated Feedback.", "We want to investigate whether the results scale if a larger log is used.", "Thus, we use $\pi_0$ to parse all 22,765 questions from $D_{\text{log}}$ and obtain an output query for each.", "For sequence-level rewards, we assign a feedback of 1 to a query if it is identical to the true target query, 0 otherwise.", "We also simulate token-level rewards by iterating over the indices of the output and assigning a feedback of 1 if the same token appears at the current index of the true target query, 0 otherwise.", "An analysis of $D_{\text{log}}$ shows that 46.27% of the queries have a sequence-level reward of 1.", "Results are reported in Table 4.", "We see that the B2S model outperforms the baseline model by a large margin, yielding an increase in F1 score by 6.24 
points.", "Optimizing the DPM objective also yields a significant increase over the baseline, but its performance falls short of the stronger B2S baseline.", "Optimizing the DPM+OSL objective leads to a substantial improvement in F1 score over optimizing DPM, but still falls slightly short of the strong B2S baseline.", "Token-level rewards are again crucial to beat the B2S baseline significantly.", "DPM+T is already able to significantly outperform B2S in this setup, and DPM+T+OSL improves upon this further.", "We also analyzed the queries for which DPM+T+OSL obtained the correct answer and the baseline system did not (see Table 5).", "The analysis showed that the vast majority of previously wrong queries were fixed by correcting an OSM tag in the query.", "For example, for the question \"closest Florist from Manchester in walking distance\" the baseline system chose the tag \"landuse : retail\" in the query, whereas DPM+T+OSL learnt that the correct tag is \"shop : florist\".", "In some cases, the question type had to be corrected, e.g.", "the baseline's suggested query returned the location of a point of interest whereas DPM+T+OSL correctly returns the phone number.", "Finally, in a few cases DPM+T+OSL corrected the structure of a query, e.g.", "by searching for a point of interest in the east of an area rather than the south.", "Analysis OSL Update Variation.", "Using the DPM+T+OSL objective and the simulated feedback setup, we vary the frequency of updating the reweighting constant.", "Results are reported in Table 6.", "Calculating the constant only once at the beginning leads to a near-identical F1 score as not using OSL.", "The more frequent update strategies, once or four times per epoch, are more effective.", "Both strategies reduce variance further and lead to higher F1 scores.", "Updating four times per epoch, compared to once per epoch, leads to a nominally higher F1 performance.", "It has the additional benefit that the re-calculation is done at the same time as the validation, leading to no additional slowdown, as executing the queries for the development set against the database takes longer than the re-calculation of the constant.", "Updating after every minibatch is infeasible, as it slows down training too much.", "Compared to the previous setup, iterating over one epoch takes approximately an additional 5.5 hours.", "Conclusion We introduced a scenario for improving a neural semantic parser from logged bandit feedback.", "This scenario is important for avoiding complex and costly data annotation for supervised learning, and it is realistic in commercial applications where weak feedback can be collected easily in large amounts from users.", "We presented robust counterfactual learning objectives that allow stochastic gradient optimization, which is crucial when working with neural networks.", "Furthermore, we showed that it is essential to obtain reward signals at the token level in order to learn from partially correct queries.", "We presented experimental results using feedback collected from humans and a larger-scale setup with simulated feedback.", "In both cases we show that a strong baseline using a bandit-to-supervised conversion can be significantly outperformed by a combination of one-step-late reweighting and token-level rewards.", "Finally, our approach to collecting feedback can also be transferred to other domains.", "For example, Yih et al. (2016) designed a user interface to help Freebase experts to efficiently create queries.", "This interface could be reversed: given a question and a query 
produced by a parser, the interface is filled out automatically and the user has to verify if the information fits." ] }
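The score-function gradients of Table 1 above can be illustrated with a minimal Python/NumPy sketch. The helpers log_prob and grad_log_prob are hypothetical stand-ins for the parser's log-likelihood and its parameter gradient (they are not part of NEMATUS), and the log is a list of (x, y, delta) triples as in D_log:

import numpy as np

# Hypothetical model interface (assumptions, not the NEMATUS API):
#   log_prob(w, x, y)      -> float: log pi_w(y|x)
#   grad_log_prob(w, x, y) -> np.ndarray: gradient of log pi_w(y|x) w.r.t. w

def dpm_gradient(w, log, log_prob, grad_log_prob):
    # Eq. (4) / Table 1: (1/n) sum_t delta_t * pi_w(y_t|x_t) * grad log pi_w(y_t|x_t)
    n = len(log)
    g = 0.0
    for x, y, delta in log:
        g = g + delta * np.exp(log_prob(w, x, y)) * grad_log_prob(w, x, y)
    return g / n

def osl_constant(w_old, log, log_prob):
    # One-step-late renormalizer: (1/n) sum_t pi_{w'}(y_t|x_t) over the ENTIRE log,
    # recomputed only periodically (e.g. at validation time), under old parameters w'.
    return sum(np.exp(log_prob(w_old, x, y)) for x, y, _ in log) / len(log)

def dpm_osl_gradient(w, minibatch, c_osl, log_prob, grad_log_prob):
    # Eq. (6) / Table 1: minibatch estimate, reweighted by the fixed OSL constant.
    m = len(minibatch)
    g = 0.0
    for x, y, delta in minibatch:
        g = g + delta * (np.exp(log_prob(w, x, y)) / c_osl) * grad_log_prob(w, x, y)
    return g / m

Because c_osl is held fixed between validation steps, minibatch updates stay cheap while the reweighting still reflects the entire log, which is the point of the one-step-late schedule.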
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-127#paper-1346#slide-0
Situation Overview
Introduction Task Objectives Experiments Conclusion - Situation: deployed system (e.g. QA, MT, ...) - Goal: improve system using human feedback - Plan: create a log D_log of user-system interactions & improve system offline (safety) Here: Improve a Neural Semantic Parser
Introduction Task Objectives Experiments Conclusion - Situation: deployed system (e.g. QA, MT, ...) - Goal: improve system using human feedback - Plan: create a log D_log of user-system interactions & improve system offline (safety) Here: Improve a Neural Semantic Parser
[]
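The template-based corpus extension described in the paper (filling $LOC, $QTYPE and $DIST placeholders for each expression-tag pair) can be illustrated with a miniature sketch; the city list, the question template and the returned parameter dictionary are simplified stand-ins, and the exact NLMAPS query syntax is deliberately omitted:

import random

cities = ["Paris", "Edinburgh", "Heidelberg"]        # stand-in for the 30-city list
qtypes = ["How many", "Where", "Is there", "$KEY"]   # the four primary question types
dists  = {"walking distance": "WALKING_DIST", "within city": "CITY_DIST"}

def instantiate(expression, tag):
    # One sample for an expression-tag pair such as ("cash machine", "amenity:atm").
    loc = random.choice(cities)
    qtype = random.choice(qtypes)
    dist_phrase = random.choice(list(dists))
    suffix = " in walking distance" if dist_phrase == "walking distance" else ""
    question = f"{qtype} {expression}s near {loc}{suffix}"  # simplified template
    # The machine-readable side would be assembled from the same samples;
    # here we only return the filled placeholders.
    return question, {"$LOC": loc, "$QTYPE": qtype, "tag": tag, "$DIST": dists[dist_phrase]}

print(instantiate("cash machine", "amenity:atm"))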
GEM-SciDuet-train-127#paper-1346#slide-1
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
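The pre-order linearization of queries used before training can be reproduced from the example given in the experiments section; the handling of quoted values as @s tokens is inferred from that example and may differ from the real preprocessing:

def parse(s, i=0):
    # Parse "f(a,b(...))" into (name, children); quoted leaves lose their quotes.
    j = i
    while j < len(s) and s[j] not in "(),":
        j += 1
    node = (s[i:j].strip().strip("'"), [])
    if j < len(s) and s[j] == "(":
        j += 1
        while s[j] != ")":
            child, j = parse(s, j)
            node[1].append(child)
            if s[j] == ",":
                j += 1
        j += 1  # consume ")"
    return node, j

def linearize(node, string_leaf=False):
    name, children = node
    if not children:
        return [name + ("@s" if string_leaf else "@0")]
    tokens = [f"{name}@{len(children)}"]
    for k, child in enumerate(children):
        # Inferred from the example: the second argument of keyval is a string value (@s).
        tokens += linearize(child, string_leaf=(name == "keyval" and k == 1))
    return tokens

tree, _ = parse("query(west(area(keyval('name','Paris')), nwr(keyval('railway','station'))),qtype(count))")
print(" ".join(linearize(tree)))
# -> query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0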
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different, historic system (see the right half of Figure 1).", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required:", "Figure 1: Left: Online reinforcement learning setup for semantic parsing where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.", "First, we need to construct an easy-to-use user interface that allows us to collect feedback based on the parse rather than the answer.", "To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion, bandit feedback from human users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold-standard parse.", "Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016) or question-answer pairs (Neelakantan et al., 2017).", "Improving semantic parsers 
using weak feedback has previously been studied (Goldwasser and Roth (2013); Artzi and Zettlemoyer (2013); inter alia).", "More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al.", "(2017); Mou et al.", "(2017); Peng et al.", "(2017); inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "Their setting thus differs from a bandit setup, where we assume that a reward is available for only one structure.", "Our work most closely resembles the work of Iyer et al.", "(2017), who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervised approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al.", "(2016) proposed a user interface for the Freebase database that enables fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely and from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a), or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016).", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al.", "(2017b).", "Following their insight, we also assume the logs were created deterministically, i.e.", "the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing Our semantic parsing model is a state-of-the-art sequence-to-sequence neural network using an encoder-decoder setup (Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015).", "We use the settings of Sennrich et al.", "(2017), where an input sequence $x = x_1, x_2, \ldots, x_{|x|}$ (a natural language question) is encoded by a Recurrent Neural Network (RNN); each input token has an associated hidden vector $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$, where the former is created by a forward pass over the input and the latter by a backward pass.", "$\overrightarrow{h}_i$ is obtained by recursively computing $f(x_i, \overrightarrow{h}_{i-1})$, where $f$ is a Gated Recurrent Unit (GRU) (Chung et al., 2014), and $\overleftarrow{h}_i$ is computed analogously.", "The input sequence is reduced to a single vector $c = g(\{h_1, \ldots, h_{|x|}\})$, which serves as the initialization of the decoder RNN.", "$g$ calculates the average over all vectors $h_i$.", "At each time step $t$ the decoder state is set by $s_t = q(s_{t-1}, y_{t-1}, c_t)$.", "$q$ is a conditional GRU with an attention mechanism and $c_t$ is the context vector computed by the attention 
mechanism.", "Given an output vocabulary V y and the decoder state s t = {s 1 , .", ".", ".", ", s |Vy| }, a softmax output layer defines a probability distribution over V y and the probability for a token y j is: sively computing f (x i , − → h i−1 ) where f is a Gated Recurrent Unit (GRU) π w (y j = t o |y <j , x) = exp(s to ) |Vy| v=1 exp(s tv ) .", "(1) The model π w can be seen as parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence y = y 1 , y 2 , .", ".", ".", "y |y| is defined by π w (y|x) = |y| j=1 π w (y j |y <j , x).", "(2) In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data D sup = {(x t ,ȳ t )} n t=1 of question-query pairs, whereȳ t is the true target query for x t , the neural network can be trained using SGD and a cross-entropy (CE) objective: L CE = − 1 n n t=1 |ȳ| j=1 log π w (ȳ j |ȳ <j , x).", "(3) Counterfactual Learning from Deterministic Bandit Logs Counterfactual Learning Objectives.", "We assume a policy π w that, given an input x ∈ X , defines a conditional probability distribution over possible outputs y ∈ Y(x).", "Furthermore, we assume that the policy is parameterized by w and its gradient can be derived.", "In this work, π w is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e.", "that the model produces the output token by token.", "The counterfactual learning problem can be described as follows: We are given a data log of ∇ wRDPM = 1 n n t=1 δ t π w (y t |x t )∇ w log π w (y t |x t ).", "∇ wRDPM+R = 1 n n t=1 [δ tπw (y t |x t )(∇ w log π w (y t |x t ) − 1 n n u=1π w (y u |x u )∇ log π w (y u |x u ))].", "∇ wRDPM+OSL = 1 m m t=1 δ tπw,w (y t |x t )∇ w log π w (y t |x t ).", "∇ wRDPM+T = 1 n n t=1 |y| j=1 δ j π w (y j |x t ) |y| j=1 ∇ w log π w (y j |x t ).", "∇ wRDPM+T+OSL = 1 m m t=1 |y| j=1 δ jπw,w (y t |x t ) |y| j=1 ∇ w log π w (y j |x t ).", "triples D log = {(x t , y t , δ t )} n t=1 where outputs y t for inputs x t were generated by a logging system under policy π 0 , and loss values δ t ∈ [−1, 0] 2 were observed for the generated data points.", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy π w given the data log D log .", "In case of deterministic logging, outputs are logged with propensity π 0 (y t |x t ) = 1, t = 1, .", ".", ".", ", n. 
] }
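Equations (1)-(3) above can be sketched numerically; the score arrays below are hypothetical decoder outputs, not values produced by the actual model:

import numpy as np

def token_probs(scores):
    # Eq. (1): softmax over the decoder scores s_t for one time step.
    e = np.exp(scores - scores.max())
    return e / e.sum()

def seq_prob(step_scores, token_ids):
    # Eq. (2): probability of a full linearized query as a product of token probabilities.
    p = 1.0
    for scores, t in zip(step_scores, token_ids):
        p *= token_probs(scores)[t]
    return p

def ce_loss(batch_scores, batch_targets):
    # Eq. (3): average negative log-likelihood of the true target queries.
    total = 0.0
    for step_scores, token_ids in zip(batch_scores, batch_targets):
        total -= sum(np.log(token_probs(s)[t]) for s, t in zip(step_scores, token_ids))
    return total / len(batch_targets)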
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-127#paper-1346#slide-1
Contrast to Previous Approaches
Introduction Task Objectives Experiments Conclusion parses Database Answers Parser Rewards r1, ..., rs Comparison required data question x gold answer
Introduction Task Objectives Experiments Conclusion parses Database Answers Parser Rewards r1, ..., rs Comparison required data question x gold answer
[]
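The bandit-to-supervised (B2S) baseline from the experiments reduces to a simple filter over the log; log entries are assumed here to be (question, query, token_rewards) triples:

def bandit_to_supervised(log):
    # B2S: keep only logged pairs whose statements were all judged correct,
    # then continue training on them with the cross-entropy objective (Eq. 3).
    return [(x, y) for x, y, token_rewards in log if all(r == 1 for r in token_rewards)]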
GEM-SciDuet-train-127#paper-1346#slide-2
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
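The simulated rewards used for the large-scale experiment can be sketched directly from their description; predicted_tokens and gold_tokens are assumed to be linearized query token lists:

def simulated_feedback(predicted_tokens, gold_tokens):
    # Sequence reward: 1 iff the suggested query is identical to the true target query.
    seq_reward = int(predicted_tokens == gold_tokens)
    # Token reward: 1 where the same token appears at the same index of the gold query.
    tok_rewards = [int(j < len(gold_tokens) and tok == gold_tokens[j])
                   for j, tok in enumerate(predicted_tokens)]
    return seq_reward, tok_rewards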
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1) .", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required: Figure 1 : Left: Online reinforcement learning setup for semantic parsing setup where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.", "First, we need to construct an easy-to-use user interface that allows to collect feedback based on the parse rather than the answer.", "To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion, human bandit feedback from human users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse.", "Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016)) or question-answer pairs (Neelakantan et al., 2017) .", "Improving semantic parsers 
using weak feedback has previously been studied (Goldwasser and Roth (2013); Artzi and Zettlemoyer (2013); inter alia).", "More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al. (2017); Mou et al. (2017); Peng et al. (2017); inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "This differs from a bandit setup, where we assume that a reward is available for only one structure.", "Our work most closely resembles that of Iyer et al. (2017), who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervised approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al. (2016) proposed a user interface for the Freebase database that enables fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a), or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016).", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al. (2017b).", "Following their insight, we also assume the logs were created deterministically, i.e. the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing Our semantic parsing model is a state-of-the-art sequence-to-sequence neural network using an encoder-decoder setup (Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015).", "We use the settings of Sennrich et al. (2017), where an input sequence $x = x_1, x_2, \dots, x_{|x|}$ (a natural language question) is encoded by a Recurrent Neural Network (RNN); each input token has an associated hidden vector $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$, where the former is created by a forward pass over the input and the latter by a backward pass.", "$\overrightarrow{h}_i$ is obtained by recursively computing $f(x_i, \overrightarrow{h}_{i-1})$, where $f$ is a Gated Recurrent Unit (GRU) (Chung et al., 2014), and $\overleftarrow{h}_i$ is computed analogously.", "The input sequence is reduced to a single vector $c = g(\{h_1, \dots, h_{|x|}\})$, which serves as the initialization of the decoder RNN; $g$ calculates the average over all vectors $h_i$.", "At each time step $t$ the decoder state is set by $s_t = q(s_{t-1}, y_{t-1}, c_t)$, where $q$ is a conditional GRU with an attention mechanism and $c_t$ is the context vector computed by the attention mechanism.",
"Given an output vocabulary $V_y$ and the decoder scores $s_t = \{s_1, \dots, s_{|V_y|}\}$, a softmax output layer defines a probability distribution over $V_y$, and the probability for a token $y_j$ is: $\pi_w(y_j = t_o \mid y_{<j}, x) = \frac{\exp(s_{t_o})}{\sum_{v=1}^{|V_y|} \exp(s_{t_v})}$. (1)", "The model $\pi_w$ can be seen as a parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence $y = y_1, y_2, \dots, y_{|y|}$ is defined by $\pi_w(y \mid x) = \prod_{j=1}^{|y|} \pi_w(y_j \mid y_{<j}, x)$. (2)", "In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data $D_{sup} = \{(x_t, \bar{y}_t)\}_{t=1}^{n}$ of question-query pairs, where $\bar{y}_t$ is the true target query for $x_t$, the neural network can be trained using SGD and a cross-entropy (CE) objective: $L_{CE} = -\frac{1}{n} \sum_{t=1}^{n} \sum_{j=1}^{|\bar{y}|} \log \pi_w(\bar{y}_j \mid \bar{y}_{<j}, x_t)$. (3)",
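To make equations (1)-(3) concrete, the following minimal sketch shows how token-level softmax probabilities compose into a sequence probability and the supervised CE loss. This is illustrative only, not the authors' NEMATUS implementation; all function and variable names are invented for the example.

```python
import numpy as np

def softmax(scores):
    # Eq. (1): distribution over the output vocabulary V_y from decoder scores.
    e = np.exp(scores - scores.max())
    return e / e.sum()

def sequence_log_prob(step_scores, query):
    # Eq. (2): log pi_w(y|x) = sum_j log pi_w(y_j | y_<j, x).
    # step_scores[j] is the decoder score vector at decoding step j.
    return sum(np.log(softmax(s)[tok]) for s, tok in zip(step_scores, query))

def cross_entropy_loss(batch):
    # Eq. (3): average negative log-likelihood of the true target queries.
    # batch: list of (step_scores, gold_query_token_ids) pairs.
    return -np.mean([sequence_log_prob(s, y) for s, y in batch])
```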
"Counterfactual Learning from Deterministic Bandit Logs Counterfactual Learning Objectives.", "We assume a policy $\pi_w$ that, given an input $x \in X$, defines a conditional probability distribution over possible outputs $y \in Y(x)$.", "Furthermore, we assume that the policy is parameterized by $w$ and its gradient can be derived.", "In this work, $\pi_w$ is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e. that the model produces the output token by token.", "The counterfactual learning problem can be described as follows: We are given a data log of triples $D_{log} = \{(x_t, y_t, \delta_t)\}_{t=1}^{n}$, where outputs $y_t$ for inputs $x_t$ were generated by a logging system under policy $\pi_0$, and loss values $\delta_t \in [-1, 0]$ were observed for the generated data points.", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy $\pi_w$ given the data log $D_{log}$.", "In case of deterministic logging, outputs are logged with propensity $\pi_0(y_t \mid x_t) = 1$, $t = 1, \dots, n$.", "This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b), without the possibility to correct the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983): $\hat{R}_{DPM}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t \mid x_t)$. (4)", "This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a).", "This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016).", "The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al. (2017b): $\hat{R}_{DPM+R}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \bar{\pi}_w(y_t \mid x_t) = \frac{\frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t \mid x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_w(y_t \mid x_t)}$. (5)", "Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms; the gradients, using the score function gradient estimator (Fu, 2006), are shown in Table 1: $\nabla_w \hat{R}_{DPM} = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t \mid x_t)\, \nabla_w \log \pi_w(y_t \mid x_t)$; $\nabla_w \hat{R}_{DPM+R} = \frac{1}{n} \sum_{t=1}^{n} \big[\delta_t\, \bar{\pi}_w(y_t \mid x_t)\big(\nabla_w \log \pi_w(y_t \mid x_t) - \frac{1}{n} \sum_{u=1}^{n} \bar{\pi}_w(y_u \mid x_u)\, \nabla_w \log \pi_w(y_u \mid x_u)\big)\big]$; $\nabla_w \hat{R}_{DPM+OSL} = \frac{1}{m} \sum_{t=1}^{m} \delta_t\, \bar{\pi}_{w,w'}(y_t \mid x_t)\, \nabla_w \log \pi_w(y_t \mid x_t)$; $\nabla_w \hat{R}_{DPM+T} = \frac{1}{n} \sum_{t=1}^{n} \big(\sum_{j=1}^{|y|} \delta_j\, \pi_w(y_j \mid x_t)\big)\big(\sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j \mid x_t)\big)$; $\nabla_w \hat{R}_{DPM+T+OSL} = \frac{1}{m} \sum_{t=1}^{m} \big(\sum_{j=1}^{|y|} \delta_j\, \bar{\pi}_{w,w'}(y_t \mid x_t)\big)\big(\sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j \mid x_t)\big)$.", "Reweighting in Stochastic Learning.", "As shown in Swaminathan and Joachims (2015b) and Lawrence et al. (2017a), reweighting over the entire data log $D_{log}$ is crucial, since it avoids that high-loss outputs in the log take away probability mass from low-loss outputs.", "This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order $O(\frac{1}{n})$ that decreases as $n$ increases (Kong, 1992).", "The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup, since minibatch sizes large enough to retain the desirable reweighting properties are infeasible for large neural networks.", "We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting.", "The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990).", "In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions.", "In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters $w'$ from some previous iteration.", "This allows us to perform gradient descent updates and reweighting asynchronously.", "Updates are performed using minibatches; however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate.", "The new objective, called the one-step-late reweighted DPM objective (DPM+OSL), optimizes $\bar{\pi}_{w,w'}$ with respect to $w$ for a minibatch of size $m$, with reweighting over the entire log of size $n$ under parameters $w'$: $\hat{R}_{DPM+OSL}(\pi_w) = \frac{1}{m} \sum_{t=1}^{m} \delta_t\, \bar{\pi}_{w,w'}(y_t \mid x_t) = \frac{\frac{1}{m} \sum_{t=1}^{m} \delta_t\, \pi_w(y_t \mid x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t \mid x_t)}$. (6)", "If the renormalization is updated periodically, e.g. after every validation step, renormalizations under $w$ or $w'$ are not much different and will not hamper convergence.", "Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance.", "The gradient for learning with OSL updates is given in Table 1.",
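As a sketch of how the DPM and DPM+OSL objectives of equations (4) and (6) might be computed, assuming `log_probs` holds the model's log-probabilities for the logged question-query pairs; the NumPy setting and all names are illustrative assumptions, not the paper's code. The OSL normalizer is evaluated over the entire log under the frozen one-step-late parameters and reused across minibatches until the next refresh.

```python
import numpy as np

def dpm_loss(log_probs, losses):
    # Eq. (4): R_DPM = (1/n) * sum_t delta_t * pi_w(y_t|x_t), minimized
    # directly since the losses delta_t lie in [-1, 0].
    return np.mean(losses * np.exp(log_probs))

def osl_normalizer(log_probs_full_log):
    # (1/n) * sum_t pi_w'(y_t|x_t): computed over the ENTIRE log under the
    # frozen one-step-late parameters w', then treated as a constant until
    # the next refresh (e.g. at the next validation step).
    return np.mean(np.exp(log_probs_full_log))

def dpm_osl_loss(mb_log_probs, mb_losses, normalizer):
    # Eq. (6): minibatch numerator divided by the frozen full-log normalizer.
    return np.mean(mb_losses * np.exp(mb_log_probs)) / normalizer
```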
"Token-Level Rewards.", "For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to be helpful: for humans, it is hard to assign a graded reward to a query at the sequence level, because either the query is correct or it is not.", "In particular, with a sequence-level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct.", "Assigning rewards at the token level eases the feedback task and allows the semantic parser to learn from partially correct queries.", "Thus, assuming the underlying policy can decompose over tokens, a token-level (DPM+T) reward objective can be defined: $\hat{R}_{DPM+T}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \big(\sum_{j=1}^{|y|} \delta_j\, \pi_w(y_j \mid x_t)\big)$. (7)", "Analogously, we can define an objective that combines the token-level rewards and the minibatched reweighting (DPM+T+OSL): $\hat{R}_{DPM+T+OSL}(\pi_w) = \frac{\frac{1}{m} \sum_{t=1}^{m} \sum_{j=1}^{|y|} \delta_j\, \pi_w(y_j \mid x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t \mid x_t)}$. (8)", "Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1.",
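A corresponding sketch for the token-level objectives (7) and (8); the per-token log-probabilities and rewards are assumed to be available as arrays, and all names are again illustrative rather than the authors' implementation.

```python
import numpy as np

def dpm_t_loss(token_log_probs, token_losses):
    # Eq. (7): (1/n) * sum_t sum_j delta_j * pi_w(y_j|x_t), with one array of
    # per-token log-probabilities and one of per-token losses per logged query.
    per_seq = [np.sum(d * np.exp(lp))
               for lp, d in zip(token_log_probs, token_losses)]
    return np.mean(per_seq)

def dpm_t_osl_loss(token_log_probs, token_losses, normalizer):
    # Eq. (8): token-level minibatch numerator divided by the one-step-late
    # full-log normalizer (see the OSL sketch above).
    return dpm_t_loss(token_log_probs, token_losses) / normalizer
```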
"Semantic Parsing in the OpenStreetMap Domain OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interest in the world.", "A point of interest consists of one or more associated GPS points.", "Further relevant information may be added at the discretion of the volunteer in the form of tags.", "Each tag consists of a key and an associated value, for example \"tourism : hotel\".", "The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis to create a natural language interface to the OSM database.", "It pairs English questions with machine readable parses, i.e. queries that can be executed against OSM.", "Human Feedback Collection.", "The task of creating a natural language interface for OSM demonstrates typical difficulties that make it expensive to collect supervised data.", "The machine readable language of the queries is based on the OVERPASS query language, which was specifically designed for the OSM database.", "It is thus not easily possible to find experts that could provide correct queries.", "It is equally difficult to ask workers at crowdsourcing platforms for the correct answer.", "For many questions, the answer set is too large to expect a worker to count or list them all in a reasonable amount of time and without errors.", "For example, for the question \"How many hotels are there in Paris?\" there are 951 hotels annotated in the OSM database.", "Instead we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human.", "The question and the created block of statements are embedded in a user interface with a form that can be filled out by users.", "Each statement is accompanied by a set of radio buttons where a user can select either \"Yes\" or \"No\".", "For a screenshot of the interface and an example, see Figure 2.", "In total there are 8 different types of statements.", "The presence of certain tokens in a query triggers different statement types.", "For example, the token \"area\" triggers the statement type \"Town\".", "The statement is then populated with the corresponding information from the query; in the case of \"area\", the corresponding OSM value is used, e.g. \"Paris\".", "With this, the meaning of every query can be captured by a set of human-understandable statements.", "For a full overview of all statement types and their triggers, see section B of the supplementary material.", "OSM tags and keys are generally understandable.", "For example, the correct OSM tag for \"hotels\" is \"tourism : hotel\", and when searching for websites, the correct question type key would be \"website\".", "Nevertheless, for each OSM tag or key, we automatically search for the corresponding page on the OpenStreetMap Wiki (https://wiki.openstreetmap.org/) and extract the description for this tag or key.", "The description is made available to the user in the form of a tool-tip that appears when hovering over the tag or key with the mouse.", "If a user is unsure whether an OSM tag or key is correct, they can read this description to help in their decision making.", "Once the form is submitted, a script maps each statement back to the corresponding tokens in the original query.", "These tokens then receive negative or positive feedback based on the feedback the user provided for that statement.", "Corpus Extension.", "Similar to the extension of the NLMAPS corpus by Lawrence and Riezler (2016), who include shortened questions which are more typically used by humans in search tasks, we present an automatic extension that allows a larger coverage of common OSM tags (the extended dataset, called NLMAPS V2, will be released upon acceptance of the paper).", "The basis for the extension is a hand-written, freely available online list that links natural language expressions such as \"cash machine\" to appropriate OSM tags, in this case \"amenity : atm\".", "Using the list, we generate for each unique expression-tag pair a set of question-query pairs.", "These latter pairs contain several placeholders which will be filled automatically in a second step.", "To fill the area placeholder $LOC, we sample from a list of 30 cities from France, Germany and the UK.", "$POI is the placeholder for a point of interest.", "We sample it from the list of objects which are located in the previously sampled city and which have a name key.", "The corresponding value belonging to the name key will be used to fill this spot.", "The placeholder $QTYPE is filled by uniformly sampling from the four primary question types available in the NLMAPS query language.", "On the natural language side they correspond to \"How many\", \"Where\", \"Is there\" and $KEY.", "$KEY is a further parameter belonging to the primary question operator FINDKEY.", "It can be filled by any OSM key, such as name, website or height.", "To ensure that there will be an answer for the generated query, we first ran a query with the current tag (\"amenity : atm\") to find all objects fulfilling this requirement in the area of the already sampled city.", "From the list of returned objects and the keys that appear in association with them, we uniformly sampled a key.", "For $DIST we chose between the pre-defined options for walking distance and within-city distance.", "The expressions map to corresponding values which define the size of a radius in which objects of interest (with tag \"amenity : atm\") will be located.", "If the walking distance was selected, we added \"in walking distance\" to the question.", "Otherwise no extra text was added to the question, assuming the within-city distance to be the default.", "This sampling process was repeated twice.", "Table 2 presents the corpus statistics, comparing the original NLMAPS corpus (Lawrence and Riezler, 2016) to our automatic extension of the most common OSM tags.", "The automatic extension, obviating the need for expensive manual work, allows a vast increase of question-query pairs by an order of magnitude.", "Consequently, the number of tokens and types increases in a similar vein.", "However, the average sentence length drops.",
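A minimal sketch of the placeholder-filling step described above; the city list, the choice probability for walking distance, and the WALKING_DIST/DIST_INTOWN radius constants are illustrative assumptions, not the exact values or names used for NLMAPS V2.

```python
import random

# Hypothetical sample data; the real lists come from the hand-written
# expression-tag list and from querying OSM for the sampled city.
CITIES = ["Paris", "Edinburgh", "Heidelberg"]   # 30 cities in the paper
QTYPES = ["How many", "Where", "Is there", "What is the $KEY of"]

def instantiate(question_tpl, query_tpl, sample_poi, sample_key):
    """Fill the $LOC, $POI, $QTYPE, $KEY and $DIST placeholders by sampling,
    mirroring the second step of the corpus extension."""
    loc = random.choice(CITIES)
    poi = sample_poi(loc)   # a named object located in the sampled city
    key = sample_key(loc)   # a key attested for objects with the tag there
    qtype = random.choice(QTYPES).replace("$KEY", key)
    walking = random.choice([True, False])
    question = (question_tpl.replace("$LOC", loc).replace("$POI", poi)
                .replace("$QTYPE", qtype).replace("$KEY", key))
    if walking:
        question += " in walking distance"
    # WALKING_DIST / DIST_INTOWN stand in for the pre-defined radius values.
    query = (query_tpl.replace("$LOC", loc).replace("$POI", poi)
             .replace("$KEY", key)
             .replace("$DIST", "WALKING_DIST" if walking else "DIST_INTOWN"))
    return question, query
```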
"This comes as no surprise, due to the nature of the rather simple hand-written list, which never contains more than one tag per element, resulting in simpler question structures.", "However, the main idea of utilizing this list is to extend the coverage to previously unknown OSM tags.", "With 6,582 distinct tags compared to the previous 477, this was clearly successful.", "Together with the still complex sentences from the original corpus, a semantic parser is now able to learn both complex questions and a large variety of tags.", "An experiment that empirically validates the usefulness of the automatically created data can be found in the supplementary material, section A.", "Experiments General Settings.", "In our experiments we use the sequence-to-sequence neural network package NEMATUS (Sennrich et al., 2017).", "Following the method used by Haas and Riezler (2016), we split the queries into individual tokens by taking a pre-order traversal of the original tree-like structure.", "For example, \"query(west(area(keyval('name','Paris')), nwr(keyval('railway','station'))),qtype(count))\" becomes \"query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0\".", "The SGD optimizer used is ADADELTA (Zeiler, 2012).", "The model employs 1,024 hidden units and word embeddings of size 1,000.", "The maximum sentence length is 200 and gradients are clipped if they exceed a value of 1.0.", "The stopping point is determined by validation on the development set, selecting the point at which the highest evaluation score is obtained.", "F1 validation is run after every 100 updates, and each update is made on the basis of a minibatch of size 80.", "The evaluation of all models is based on the answers obtained by executing the most likely query found by a beam search with a beam of size 12.", "We report the F1 score, which is the harmonic mean of precision and recall.", "Recall is defined as the percentage of fully correct answers divided by the set size.", "Precision is the percentage of correct answers out of the set of answers with non-empty strings.", "Statistical significance between models is measured using an approximate randomization test (Noreen, 1989).", "Baseline Parser & Log Creation.", "Our experiment design assumes a baseline neural semantic parser that is trained in fully supervised fashion and is to be improved by bandit feedback obtained for outputs of the baseline system for given questions.", "For this purpose, we select 2,000 question-query pairs randomly from the full extended NLMAPS V2 corpus.", "We will call this dataset $D_{sup}$.", "Using this dataset, a baseline semantic parser is trained in supervised fashion under a cross-entropy objective.", "It obtains an F1 score of 57.45% and serves as the logging policy $\pi_0$.", "Furthermore, we randomly split off 1,843 and 2,000 pairs for a development and test set, respectively.", "This leaves a set of 22,765 question-query pairs.", "The questions can be used as input, and bandit feedback can be collected for the most likely output of the semantic parser.", "We refer to this dataset as $D_{log}$.", "To collect human feedback, we take the first 1,000 questions from $D_{log}$ and use $\pi_0$ to parse these questions to obtain one output query for each.", "5 question-query pairs are discarded because the suggested query is invalid.", "For the remaining question-query pairs, the queries are each transformed into a block of human-understandable statements and embedded into the user interface described in Section 5.",
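The linearization can be sketched as a pre-order traversal over a nested (label, children) representation of the query; the tree encoding itself is an assumption for illustration, but the output reproduces the paper's example.

```python
def linearize(tree):
    # Pre-order traversal that annotates each node label with its arity,
    # e.g. ("query", [...]) -> "query@2 ...". String-valued leaves get "@s",
    # zero-argument constants get "@0", following the scheme shown above.
    label, children = tree
    if children is None:                 # string value leaf
        return [label + "@s"]
    tokens = [f"{label}@{len(children)}"]
    for child in children:
        tokens.extend(linearize(child))
    return tokens

# Example mirroring the paper's illustration:
tree = ("query", [
    ("west", [
        ("area", [("keyval", [("name", []), ("Paris", None)])]),
        ("nwr",  [("keyval", [("railway", []), ("station", None)])]),
    ]),
    ("qtype", [("count", [])]),
])
print(" ".join(linearize(tree)))
# query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0
```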
5.", "We recruited 9 users to provide feedback for these question-query pairs.", "The resulting log is referred to as D human .", "Every question-query pair is purposely evaluated only once to mimic a realistic real-world scenario where user logs are collected as users use the system.", "In this scenario, it is also not possible to explicitly obtain several evaluations for the same question-query pair.", "Some examples of the received feedback can be found in the supplementary material, section C. To verify that the feedback collection is efficient, we measured the time each user took from loading a form to submitting it.", "To provide feedback for one question-query pair, users took 16.4 seconds on average with a standard deviation of 33.2 seconds.", "The vast majority (728 instances) are completed in less than 10 seconds.", "Learning from Human Bandit Feedback.", "An analysis of D human shows that for 531 queries all corresponding statements were marked as correct.", "We consider a simple baseline that treats completely correct logged data as a supervised data set with which training continues using the crossentropy objective.", "We call this baseline banditto-supervised conversion (B2S).", "Furthermore, we present experimental results using the log D human for stochastic (minibatch) gradient descent optimization of the counterfactual objectives introduced in equations 4, 6, 7 and 8.", "For the tokenlevel feedback, we map the evaluated statements back to the corresponding tokens in the original query and assign these tokens a feedback of 0 if the corresponding statement was marked as wrong and 1 otherwise.", "In the case of sequence-level feedback, the query receives a feedback of 1 if all statements are marked correct, 0 otherwise.", "For the OSL objectives, a separate experiment (see below) showed that updating the reweighting constant after every validation step promises the best trade-off between performance and speed.", "Results, averaged over 3 runs, are reported in Table 3 .", "The B2S model can slightly improve upon the baseline but not significantly.", "DPM improves further, significantly beating the baseline.", "Using the multiplicative control variate modified for SGD by OSL updates does not seem to help in this setup.", "By moving to token-level rewards, it is possible to learn from partially correct queries.", "These partially correct queries provide valuable information that is not present in the subset of correct answers employed by the previous models.", "Optimizing DPM+T leads to a slight improvement and combined with the multiplicative control variate, DPM+T+OSL yields an improvement of about 1.0 in F1 score upon the baseline.", "It beats both the baseline and the B2S model by a significant margin.", "Learning from Large-Scale Simulated Feedback.", "We want to investigate whether the results scale if a larger log is used.", "Thus, we use π 0 to parse all 22,765 questions from D log and obtain for each an output query.", "For sequence level rewards, we assign feedback of 1 for a query if it is identical to the true target query, 0 otherwise.", "We also simulate token-level rewards by iterating over the indices of the output and assigning a feedback of 1 if the same token appears at the current index for the true target query, 0 otherwise.", "An analysis of D log shows that 46.27% of the queries have a sequence level reward of 1 and are Table 4 .", "We see that the B2S model outperforms the baseline model by a large margin, yielding an increase in F1 score by 6.24 
"Optimizing the DPM objective also yields a significant increase over the baseline, but its performance falls short of the stronger B2S baseline.", "Optimizing the DPM+OSL objective leads to a substantial improvement in F1 score over optimizing DPM, but still falls slightly short of the strong B2S baseline.", "Token-level rewards are again crucial to beat the B2S baseline significantly.", "DPM+T is already able to significantly outperform B2S in this setup, and DPM+T+OSL can improve upon this further.", "A qualitative analysis of queries where DPM+T+OSL obtained the correct answer and the baseline system did not (see Table 5) showed that the vast majority of previously wrong queries were fixed by correcting an OSM tag in the query.", "For example, for the question \"closest Florist from Manchester in walking distance\" the baseline system chose the tag \"landuse : retail\" in the query, whereas DPM+T+OSL learnt that the correct tag is \"shop : florist\".", "In some cases, the question type had to be corrected, e.g. the baseline's suggested query returned the location of a point of interest but DPM+T+OSL correctly returns the phone number.", "Finally, in a few cases DPM+T+OSL corrected the structure of a query, e.g. by searching for a point of interest in the east of an area rather than the south.", "Analysis OSL Update Variation.", "Using the DPM+T+OSL objective and the simulated feedback setup, we vary the frequency of updating the reweighting constant.", "Results are reported in Table 6.", "Calculating the constant only once at the beginning leads to a near-identical result in F1 score as not using OSL.", "The more frequent update strategies, once or four times per epoch, are more effective.", "Both strategies reduce variance further and lead to higher F1 scores.", "Updating four times per epoch, compared to once per epoch, leads to a nominally higher performance in F1.", "It has the additional benefit that the re-calculation is done at the same time as the validation, leading to no additional slow-down, as executing the queries for the development set against the database takes longer than the re-calculation of the constant.", "Updating after every minibatch is infeasible as it slows down training too much.", "Compared to the previous setup, iterating over one epoch takes approximately an additional 5.5 hours.", "Conclusion We introduced a scenario for improving a neural semantic parser from logged bandit feedback.", "This scenario is important to avoid complex and costly data annotation for supervised learning, and it is realistic in commercial applications where weak feedback can be collected easily in large amounts from users.", "We presented robust counterfactual learning objectives that allow us to perform stochastic gradient optimization, which is crucial in working with neural networks.", "Furthermore, we showed that it is essential to obtain reward signals at the token level in order to learn from partially correct queries.", "We presented experimental results using feedback collected from humans and a larger-scale setup with simulated feedback.", "In both cases we show that a strong baseline using a bandit-to-supervised conversion can be significantly outperformed by a combination of one-step-late reweighting and token-level rewards.", "Finally, our approach to collecting feedback can also be transferred to other domains.", "For example, Yih et al. (2016) designed a user interface to help Freebase experts to efficiently create queries.", "This interface could be reversed: given a question and a query
produced by a parser, the interface is filled out automatically and the user has to verify if the information fits." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-127#paper-1346#slide-2
Our Approach
Introduction Task Objectives Experiments Conclusion [Slide diagram: the online RL setup (required data: question x and a gold answer; the parser produces parses y1, ..., ys, the database returns answers, and comparison against the gold answer yields rewards r1, ..., rs used for training) contrasted with our setup (required data: question x only; the parser outputs a single parse y, the database returns answer a, and user feedback r is logged as (x, y, r) for training).] No supervision: given an input, the gold output is unknown. Bandit: feedback is given for only one system output. Bias: log D is biased to the decisions of the deployed system. Solution: Counterfactual / Off-policy Reinforcement Learning
[]
GEM-SciDuet-train-127#paper-1346#slide-4
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1) .", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required: Figure 1 : Left: Online reinforcement learning setup for semantic parsing setup where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.", "First, we need to construct an easy-to-use user interface that allows to collect feedback based on the parse rather than the answer.", "To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion, human bandit feedback from human users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse.", "Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016)) or question-answer pairs (Neelakantan et al., 2017) .", "Improving semantic parsers 
using weak feedback has previously been studied (Goldwasser and Roth (2013); Artzi and Zettlemoyer (2013); inter alia).", "More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al. (2017); Mou et al. (2017); Peng et al. (2017); inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "This differs from a bandit setup, where we assume that a reward is available for only one structure.", "Our work most closely resembles that of Iyer et al. (2017), who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervised approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al. (2016) proposed a user interface for the Freebase database that enables fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a), or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016).", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al. (2017b).", "Following their insight, we also assume the logs were created deterministically, i.e. the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing Our semantic parsing model is a state-of-the-art sequence-to-sequence neural network using an encoder-decoder setup (Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015).", "We use the settings of Sennrich et al. (2017), where an input sequence $x = x_1, x_2, \dots, x_{|x|}$ (a natural language question) is encoded by a Recurrent Neural Network (RNN); each input token has an associated hidden vector $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$, where the former is created by a forward pass over the input and the latter by a backward pass.", "$\overrightarrow{h}_i$ is obtained by recursively computing $f(x_i, \overrightarrow{h}_{i-1})$, where $f$ is a Gated Recurrent Unit (GRU) (Chung et al., 2014), and $\overleftarrow{h}_i$ is computed analogously.", "The input sequence is reduced to a single vector $c = g(\{h_1, \dots, h_{|x|}\})$, which serves as the initialization of the decoder RNN; $g$ calculates the average over all vectors $h_i$.", "At each time step $t$ the decoder state is set by $s_t = q(s_{t-1}, y_{t-1}, c_t)$, where $q$ is a conditional GRU with an attention mechanism and $c_t$ is the context vector computed by the attention mechanism.",
"Given an output vocabulary $V_y$ and the decoder scores $s_t = \{s_1, \dots, s_{|V_y|}\}$, a softmax output layer defines a probability distribution over $V_y$, and the probability for a token $y_j$ is: $\pi_w(y_j = t_o \mid y_{<j}, x) = \frac{\exp(s_{t_o})}{\sum_{v=1}^{|V_y|} \exp(s_{t_v})}$. (1)", "The model $\pi_w$ can be seen as a parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence $y = y_1, y_2, \dots, y_{|y|}$ is defined by $\pi_w(y \mid x) = \prod_{j=1}^{|y|} \pi_w(y_j \mid y_{<j}, x)$. (2)", "In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data $D_{sup} = \{(x_t, \bar{y}_t)\}_{t=1}^{n}$ of question-query pairs, where $\bar{y}_t$ is the true target query for $x_t$, the neural network can be trained using SGD and a cross-entropy (CE) objective: $L_{CE} = -\frac{1}{n} \sum_{t=1}^{n} \sum_{j=1}^{|\bar{y}|} \log \pi_w(\bar{y}_j \mid \bar{y}_{<j}, x_t)$. (3)",
"Counterfactual Learning from Deterministic Bandit Logs Counterfactual Learning Objectives.", "We assume a policy $\pi_w$ that, given an input $x \in X$, defines a conditional probability distribution over possible outputs $y \in Y(x)$.", "Furthermore, we assume that the policy is parameterized by $w$ and its gradient can be derived.", "In this work, $\pi_w$ is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e. that the model produces the output token by token.", "The counterfactual learning problem can be described as follows: We are given a data log of triples $D_{log} = \{(x_t, y_t, \delta_t)\}_{t=1}^{n}$, where outputs $y_t$ for inputs $x_t$ were generated by a logging system under policy $\pi_0$, and loss values $\delta_t \in [-1, 0]$ were observed for the generated data points.", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy $\pi_w$ given the data log $D_{log}$.", "In case of deterministic logging, outputs are logged with propensity $\pi_0(y_t \mid x_t) = 1$, $t = 1, \dots, n$.", "This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b), without the possibility to correct the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983): $\hat{R}_{DPM}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t \mid x_t)$. (4)", "This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a).", "This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016).", "The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al. (2017b): $\hat{R}_{DPM+R}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \bar{\pi}_w(y_t \mid x_t) = \frac{\frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t \mid x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_w(y_t \mid x_t)}$. (5)", "Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms; the gradients, using the score function gradient estimator (Fu, 2006), are shown in Table 1: $\nabla_w \hat{R}_{DPM} = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t \mid x_t)\, \nabla_w \log \pi_w(y_t \mid x_t)$; $\nabla_w \hat{R}_{DPM+R} = \frac{1}{n} \sum_{t=1}^{n} \big[\delta_t\, \bar{\pi}_w(y_t \mid x_t)\big(\nabla_w \log \pi_w(y_t \mid x_t) - \frac{1}{n} \sum_{u=1}^{n} \bar{\pi}_w(y_u \mid x_u)\, \nabla_w \log \pi_w(y_u \mid x_u)\big)\big]$; $\nabla_w \hat{R}_{DPM+OSL} = \frac{1}{m} \sum_{t=1}^{m} \delta_t\, \bar{\pi}_{w,w'}(y_t \mid x_t)\, \nabla_w \log \pi_w(y_t \mid x_t)$; $\nabla_w \hat{R}_{DPM+T} = \frac{1}{n} \sum_{t=1}^{n} \big(\sum_{j=1}^{|y|} \delta_j\, \pi_w(y_j \mid x_t)\big)\big(\sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j \mid x_t)\big)$; $\nabla_w \hat{R}_{DPM+T+OSL} = \frac{1}{m} \sum_{t=1}^{m} \big(\sum_{j=1}^{|y|} \delta_j\, \bar{\pi}_{w,w'}(y_t \mid x_t)\big)\big(\sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j \mid x_t)\big)$.", "Reweighting in Stochastic Learning.", "As shown in Swaminathan and Joachims (2015b) and Lawrence et al. (2017a), reweighting over the entire data log $D_{log}$ is crucial, since it avoids that high-loss outputs in the log take away probability mass from low-loss outputs.", "This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order $O(\frac{1}{n})$ that decreases as $n$ increases (Kong, 1992).", "The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup, since minibatch sizes large enough to retain the desirable reweighting properties are infeasible for large neural networks.", "We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting.", "The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990).", "In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions.", "In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters $w'$ from some previous iteration.", "This allows us to perform gradient descent updates and reweighting asynchronously.", "Updates are performed using minibatches; however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate.", "The new objective, called the one-step-late reweighted DPM objective (DPM+OSL), optimizes $\bar{\pi}_{w,w'}$ with respect to $w$ for a minibatch of size $m$, with reweighting over the entire log of size $n$ under parameters $w'$: $\hat{R}_{DPM+OSL}(\pi_w) = \frac{1}{m} \sum_{t=1}^{m} \delta_t\, \bar{\pi}_{w,w'}(y_t \mid x_t) = \frac{\frac{1}{m} \sum_{t=1}^{m} \delta_t\, \pi_w(y_t \mid x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t \mid x_t)}$. (6)", "If the renormalization is updated periodically, e.g. after every validation step, renormalizations under $w$ or $w'$ are not much different and will not hamper convergence.", "Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance.", "The gradient for learning with OSL updates is given in Table 1.",
"Token-Level Rewards.", "For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to be helpful: for humans, it is hard to assign a graded reward to a query at the sequence level, because either the query is correct or it is not.", "In particular, with a sequence-level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct.", "Assigning rewards at the token level eases the feedback task and allows the semantic parser to learn from partially correct queries.", "Thus, assuming the underlying policy can decompose over tokens, a token-level (DPM+T) reward objective can be defined: $\hat{R}_{DPM+T}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \big(\sum_{j=1}^{|y|} \delta_j\, \pi_w(y_j \mid x_t)\big)$. (7)", "Analogously, we can define an objective that combines the token-level rewards and the minibatched reweighting (DPM+T+OSL): $\hat{R}_{DPM+T+OSL}(\pi_w) = \frac{\frac{1}{m} \sum_{t=1}^{m} \sum_{j=1}^{|y|} \delta_j\, \pi_w(y_j \mid x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t \mid x_t)}$. (8)", "Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1.",
"Semantic Parsing in the OpenStreetMap Domain OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interest in the world.", "A point of interest consists of one or more associated GPS points.", "Further relevant information may be added at the discretion of the volunteer in the form of tags.", "Each tag consists of a key and an associated value, for example \"tourism : hotel\".", "The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis to create a natural language interface to the OSM database.", "It pairs English questions with machine readable parses, i.e. queries that can be executed against OSM.", "Human Feedback Collection.", "The task of creating a natural language interface for OSM demonstrates typical difficulties that make it expensive to collect supervised data.", "The machine readable language of the queries is based on the OVERPASS query language, which was specifically designed for the OSM database.", "It is thus not easily possible to find experts that could provide correct queries.", "It is equally difficult to ask workers at crowdsourcing platforms for the correct answer.", "For many questions, the answer set is too large to expect a worker to count or list them all in a reasonable amount of time and without errors.", "For example, for the question \"How many hotels are there in Paris?\" there are 951 hotels annotated in the OSM database.", "Instead we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human.", "The question and the created block of statements are embedded in a user interface with a form that can be filled out by users.", "Each statement is accompanied by a set of radio buttons where a user can select either \"Yes\" or \"No\".", "For a screenshot of the interface and an example, see Figure 2.", "In total there are 8 different types of statements.", "The presence of certain tokens in a query triggers different statement types.", "For example, the token \"area\" triggers the statement type \"Town\".", "The statement is then populated with the corresponding information from the query; in the case of \"area\", the corresponding OSM value is used, e.g. \"Paris\".", "With this, the meaning of every query can be captured by a set of human-understandable statements.", "For a full overview of all statement types and their triggers, see section B of the supplementary material.", "OSM tags and keys are generally understandable.", "For example, the correct OSM tag for \"hotels\" is \"tourism : hotel\", and when searching for websites, the correct question type key would be \"website\".", "Nevertheless, for each OSM tag or key, we automatically search for the corresponding page on the OpenStreetMap Wiki (https://wiki.openstreetmap.org/) and extract the description for this tag or key.", "The description is made available to the user in the form of a tool-tip that appears when hovering over the tag or key with the mouse.", "If a user is unsure whether an OSM tag or key is correct, they can read this description to help in their decision making.", "Once the form is submitted, a script maps each statement back to the corresponding tokens in the original query.", "These tokens then receive negative or positive feedback based on the feedback the user provided for that statement.", "Corpus Extension.", "Similar to the extension of the NLMAPS corpus by Lawrence and Riezler (2016), who include shortened questions which are more typically used by humans in search tasks, we present an automatic extension that allows a larger coverage of common OSM tags (the extended dataset, called NLMAPS V2, will be released upon acceptance of the paper).", "The basis for the extension is a hand-written, freely available online list that links natural language expressions such as \"cash machine\" to appropriate OSM tags, in this case \"amenity : atm\".", "Using the list, we generate for each unique expression-tag pair a set of question-query pairs.", "These latter pairs contain several placeholders which will be filled automatically in a second step.", "To fill the area placeholder $LOC, we sample from a list of 30 cities from France, Germany and the UK.", "$POI is the placeholder for a point of interest.", "We sample it from the list of objects which are located in the previously sampled city and which have a name key.", "The corresponding value belonging to the name key will be used to fill this spot.", "The placeholder $QTYPE is filled by uniformly sampling from the four primary question types available in the NLMAPS query language.", "On the natural language side they correspond to \"How many\", \"Where\", \"Is there\" and $KEY.", "$KEY is a further parameter belonging to the primary question operator FINDKEY.", "It can be filled by any OSM key, such as name, website or height.", "To ensure that there will be an answer for the generated query, we first ran a query with the current tag (\"amenity : atm\") to find all objects fulfilling this requirement in the area of the already sampled city.", "From the list of returned objects and the keys that appear in association with them, we uniformly sampled a key.", "For $DIST we chose between the pre-defined options for walking distance and within-city distance.", "The expressions map to corresponding values which define the size of a radius in which objects of interest (with tag \"amenity : atm\") will be located.", "If the walking distance was selected, we added \"in walking distance\" to the question.", "Otherwise no extra text was added to the question, assuming the within-city distance to be the default.", "This sampling process was repeated twice.", "Table 2 presents the corpus statistics, comparing the original NLMAPS corpus (Lawrence and Riezler, 2016) to our automatic extension of the most common OSM tags.", "The automatic extension, obviating the need for expensive manual work, allows a vast increase of question-query pairs by an order of magnitude.", "Consequently, the number of tokens and types increases in a similar vein.", "However, the average sentence length drops.",
no surprise due to the nature of the rather simple hand-written list, which never contains more than one tag per element, resulting in simpler question structures.", "However, the main idea of utilizing this list is to extend the coverage to previously unknown OSM tags.", "With 6,582 distinct tags compared to the previous 477, this was clearly successful.", "Together with the still complex sentences from the original corpus, a semantic parser is now able to learn both complex questions and a large variety of tags.", "An experiment that empirically validates the usefulness of the automatically created data can be found in the supplementary material, section A.", "Experiments.", "General Settings.", "In our experiments we use the sequence-to-sequence neural network package NEMATUS (Sennrich et al., 2017).", "Following the method used by Haas and Riezler (2016), we split the queries into individual tokens by taking a pre-order traversal of the original tree-like structure.", "For example, \"query(west(area(keyval('name','Paris')), nwr(keyval('railway','station'))),qtype(count))\" becomes \"query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0\" (a recursive implementation of this linearization is sketched below).", "The SGD optimizer used is ADADELTA (Zeiler, 2012).", "The model employs 1,024 hidden units and word embeddings of size 1,000.", "The maximum sentence length is 200 and gradients are clipped if they exceed a value of 1.0.", "The stopping point is determined by validation on the development set, selecting the point at which the highest evaluation score is obtained.", "F1 validation is run after every 100 updates, and each update is made on the basis of a minibatch of size 80.", "The evaluation of all models is based on the answers obtained by executing the most likely query found by a beam search with a beam of size 12.", "We report the F1 score, which is the harmonic mean of precision and recall.", "Recall is defined as the percentage of fully correct answers divided by the set size.", "Precision is the percentage of correct answers out of the set of answers with non-empty strings.", "Statistical significance between models is measured using an approximate randomization test (Noreen, 1989).", "Baseline Parser & Log Creation.", "Our experiment design assumes a baseline neural semantic parser that is trained in fully supervised fashion and is to be improved by bandit feedback obtained for system outputs from the baseline system for given questions.", "For this purpose, we select 2,000 question-query pairs randomly from the full extended NLMAPS V2 corpus.", "We will call this dataset D_sup.", "Using this dataset, a baseline semantic parser is trained in supervised fashion under a cross-entropy objective.", "It obtains an F1 score of 57.45% and serves as the logging policy π_0.", "Furthermore, we randomly split off 1,843 and 2,000 pairs for a development and test set, respectively.", "This leaves a set of 22,765 question-query pairs.", "The questions can be used as input, and bandit feedback can be collected for the most likely output of the semantic parser.", "We refer to this dataset as D_log.", "To collect human feedback, we take the first 1,000 questions from D_log and use π_0 to parse these questions to obtain one output query for each.", "Five question-query pairs are discarded because the suggested query is invalid.", "For the remaining question-query pairs, the queries are each transformed into a block of human-understandable statements and embedded into the user interface described in Section 
5.", "We recruited 9 users to provide feedback for these question-query pairs.", "The resulting log is referred to as D human .", "Every question-query pair is purposely evaluated only once to mimic a realistic real-world scenario where user logs are collected as users use the system.", "In this scenario, it is also not possible to explicitly obtain several evaluations for the same question-query pair.", "Some examples of the received feedback can be found in the supplementary material, section C. To verify that the feedback collection is efficient, we measured the time each user took from loading a form to submitting it.", "To provide feedback for one question-query pair, users took 16.4 seconds on average with a standard deviation of 33.2 seconds.", "The vast majority (728 instances) are completed in less than 10 seconds.", "Learning from Human Bandit Feedback.", "An analysis of D human shows that for 531 queries all corresponding statements were marked as correct.", "We consider a simple baseline that treats completely correct logged data as a supervised data set with which training continues using the crossentropy objective.", "We call this baseline banditto-supervised conversion (B2S).", "Furthermore, we present experimental results using the log D human for stochastic (minibatch) gradient descent optimization of the counterfactual objectives introduced in equations 4, 6, 7 and 8.", "For the tokenlevel feedback, we map the evaluated statements back to the corresponding tokens in the original query and assign these tokens a feedback of 0 if the corresponding statement was marked as wrong and 1 otherwise.", "In the case of sequence-level feedback, the query receives a feedback of 1 if all statements are marked correct, 0 otherwise.", "For the OSL objectives, a separate experiment (see below) showed that updating the reweighting constant after every validation step promises the best trade-off between performance and speed.", "Results, averaged over 3 runs, are reported in Table 3 .", "The B2S model can slightly improve upon the baseline but not significantly.", "DPM improves further, significantly beating the baseline.", "Using the multiplicative control variate modified for SGD by OSL updates does not seem to help in this setup.", "By moving to token-level rewards, it is possible to learn from partially correct queries.", "These partially correct queries provide valuable information that is not present in the subset of correct answers employed by the previous models.", "Optimizing DPM+T leads to a slight improvement and combined with the multiplicative control variate, DPM+T+OSL yields an improvement of about 1.0 in F1 score upon the baseline.", "It beats both the baseline and the B2S model by a significant margin.", "Learning from Large-Scale Simulated Feedback.", "We want to investigate whether the results scale if a larger log is used.", "Thus, we use π 0 to parse all 22,765 questions from D log and obtain for each an output query.", "For sequence level rewards, we assign feedback of 1 for a query if it is identical to the true target query, 0 otherwise.", "We also simulate token-level rewards by iterating over the indices of the output and assigning a feedback of 1 if the same token appears at the current index for the true target query, 0 otherwise.", "An analysis of D log shows that 46.27% of the queries have a sequence level reward of 1 and are Table 4 .", "We see that the B2S model outperforms the baseline model by a large margin, yielding an increase in F1 score by 6.24 
points.", "Optimizing the DPM objective also yields a significant increase over the baseline, but its performance falls short of the stronger B2S baseline.", "Optimizing the DPM+OSL objective leads to a substantial improvement in F1 score over optimizing DPM but still falls slightly short of the strong B2S baseline.", "Token-level rewards are again crucial to beat the B2S baseline significantly.", "DPM+T is already able to significantly outperform B2S in this setup and DPM+T+OSL can improve upon this further.", "tained the correct answer and the baseline system did not (see Table 5 ).", "The analysis showed that the vast majority of previously wrong queries were fixed by correcting an OSM tag in the query.", "For example, for the question \"closest Florist from Manchester in walking distance\" the baseline system chose the tag \"landuse : retail\" in the query, whereas DPM+T+OSL learnt that the correct tag is \"shop : florist\".", "In some cases, the question type had to be corrected, e.g.", "the baseline's suggested query returned the location of a point of interest but DPM+T+OSL correctly returns the phone number.", "Finally, in a few cases DPM+T+OSL corrected the structure for a query, e.g.", "by searching for a point of interest in the east of an area rather than the south.", "Analysis OSL Update Variation.", "Using the DPM+T+OSL objective and the simulated feedback setup, we vary the frequency of updating the reweighting constant.", "Results are reported in Table 6 .", "Calculating the constant only once at the beginning leads to a near identical result in F1 score as not using OSL.", "The more frequent update strategies, once or four times per epoch, are more effective.", "Both strategies reduce variance further and lead to higher F1 scores.", "Updating four times per epoch compared to once per epoch, leads to a nominally higher performance in F1.", "It has the additional benefit that the re-calculation is done at the same time as the validation, leading to no additional slow down as executing the queries for the development set against the database takes longer than the re-calculation of the constant.", "Updating after every minibatch is infeasible as it slows down training too much.", "Compared to the previous setup, iterating over one epoch takes approximately an additional 5.5 hours.", "Conclusion We introduced a scenario for improving a neural semantic parser from logged bandit feedback.", "This scenario is important to avoid complex and costly data annotation for supervise learning, and it is realistic in commercial applications where weak feedback can be collected easily in large amounts from users.", "We presented robust counterfactual learning objectives that allow to perform stochastic gradient optimization which is crucial in working with neural networks.", "Furthermore, we showed that it is essential to obtain reward signals at the token-level in order to learn from partially correct queries.", "We presented experimental results using feedback collected from humans and a larger scale setup with simulated feedback.", "In both cases we show that a strong baseline using a bandit-to-supervised conversion can be significantly outperformed by a combination of a onestep-late reweighting and token-level rewards.", "Finally, our approach to collecting feedback can also be transferred to other domains.", "For example, (Yih et al., 2016) designed a user interface to help Freebase experts to efficiently create queries.", "This interface could be reversed: given a question and a query 
produced by a parser, the interface is filled out automatically and the user has to verify if the information fits." ] }
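The corpus extension described in the record above fills template placeholders by sampling. The following is a hedged sketch of that procedure; the natural-language templates and helper data structures (pois, keys_with_tag) are illustrative assumptions, not the authors' implementation.

```python
import random

CITIES = ["Paris", "Heidelberg", "Edinburgh"]       # the paper samples from 30 FR/DE/UK cities
QTYPES = ["How many", "Where", "Is there", "$KEY"]  # the four primary question types

def fill_placeholders(expression, tag, pois, keys_with_tag, rng=random):
    """Fill $LOC/$POI/$QTYPE/$KEY/$DIST for one expression-tag pair, e.g. 'cash machine' / 'amenity : atm'."""
    loc = rng.choice(CITIES)                          # $LOC
    poi = rng.choice(pois[loc])                       # $POI: object with a 'name' key in that city
    qtype = rng.choice(QTYPES)                        # $QTYPE
    if qtype == "$KEY":                               # FINDKEY: pick a key attested for the tag
        key = rng.choice(keys_with_tag[(loc, tag)])   # in the sampled city, so an answer exists
        question = f"What is the {key} of the {expression} near {poi} in {loc}"
    else:
        question = f"{qtype} {expression} near {poi} in {loc}"
    if rng.choice([True, False]):                     # $DIST: walking vs. default in-town radius
        question += " in walking distance"
    return question

# Toy usage with made-up data:
pois = {c: [f"POI in {c}"] for c in CITIES}
keys = {(c, "amenity : atm"): ["name", "website"] for c in CITIES}
print(fill_placeholders("cash machine", "amenity : atm", pois, keys))
```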
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-127#paper-1346#slide-4
A natural language interface to OpenStreetMap
Introduction Task Objectives Experiments Conclusion - OpenStreetMap (OSM): geographical database - NLmaps v2: extension of the previous corpus, now totalling - example question: How many hotels are there in Paris? - correctness of answers is difficult to judge -> judge parses by making them human-understandable - feedback collection setup: automatically convert a parse to a set of statements; humans judge the statements
Introduction Task Objectives Experiments Conclusion - OpenStreetMap (OSM): geographical database - NLmaps v2: extension of the previous corpus, now totalling - example question: How many hotels are there in Paris? - correctness of answers is difficult to judge -> judge parses by making them human-understandable - feedback collection setup: automatically convert a parse to a set of statements; humans judge the statements
[]
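The slide above summarizes the feedback collection: a parse is converted into statements, and users judge each statement. The sketch below shows how such judgments can be mapped back to token-level rewards; the bookkeeping structure statement_spans (statement id -> covered token indices) is an assumption, recorded when the form is generated.

```python
def token_rewards(query_tokens, statement_spans, judgments):
    """Turn per-statement Yes/No judgments into a 0/1 reward per query token."""
    rewards = [1.0] * len(query_tokens)   # defaulting uncovered tokens to 1 is our choice here
    for stmt_id, indices in statement_spans.items():
        value = 1.0 if judgments[stmt_id] == "Yes" else 0.0
        for i in indices:
            rewards[i] = value
    return rewards

# Toy usage: statement 0 covers tokens 0-2 (judged correct), statement 1 covers token 3 (wrong).
print(token_rewards(["query@2", "area@1", "keyval@2", "count@0"],
                    {0: [0, 1, 2], 1: [3]}, {0: "Yes", 1: "No"}))  # [1.0, 1.0, 1.0, 0.0]
```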
GEM-SciDuet-train-127#paper-1346#slide-5
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
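One baseline discussed in the experiments above is the bandit-to-supervised (B2S) conversion: log entries whose statements were all judged correct are reused as a supervised dataset. A minimal sketch, with train_ce standing in as a hypothetical cross-entropy training call:

```python
def bandit_to_supervised(log_entries):
    """Keep (question, query) pairs whose statements were all marked 'Yes'."""
    return [(x, y) for x, y, judgments in log_entries
            if all(j == "Yes" for j in judgments.values())]

# supervised_pairs = bandit_to_supervised(d_human)  # 531 of the 995 logged queries here
# train_ce(model, supervised_pairs)                 # hypothetical supervised training step
```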
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1) .", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required: Figure 1 : Left: Online reinforcement learning setup for semantic parsing setup where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.", "First, we need to construct an easy-to-use user interface that allows to collect feedback based on the parse rather than the answer.", "To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion, human bandit feedback from human users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse.", "Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016)) or question-answer pairs (Neelakantan et al., 2017) .", "Improving semantic parsers 
using weak feedback has previously been studied (Goldwasser and Roth (2013); Artzi and Zettlemoyer (2013); inter alia).", "More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al. (2017); Mou et al. (2017); Peng et al. (2017); inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "Their setting thus differs from a bandit setup, where we assume that a reward is available for only one structure.", "Our work most closely resembles the work of Iyer et al. (2017), who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus, and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervised approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al. (2016) proposed a user interface for the Freebase database that enables fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a) or, equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016).", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al. (2017b).", "Following their insight, we also assume the logs were created deterministically, i.e. the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing.", "Our semantic parsing model is a state-of-the-art sequence-to-sequence neural network using an encoder-decoder setup (Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015).", "We use the settings of Sennrich et al. (2017), where an input sequence x = x_1, x_2, ..., x_{|x|} (a natural language question) is encoded by a Recurrent Neural Network (RNN); each input token has an associated hidden vector h_i = [→h_i; ←h_i], where the former is created by a forward pass over the input and the latter by a backward pass.", "→h_i is obtained by recursively computing f(x_i, →h_{i-1}), where f is a Gated Recurrent Unit (GRU) (Chung et al., 2014), and ←h_i is computed analogously.", "The input sequence is reduced to a single vector c = g({h_1, ..., h_{|x|}}), which serves as the initialization of the decoder RNN.", "g calculates the average over all vectors h_i.", "At each time step t the decoder state is set by s_t = q(s_{t-1}, y_{t-1}, c_t).", "q is a conditional GRU with an attention mechanism, and c_t is the context vector computed by the attention mechanism.", "Given an output vocabulary V_y and the decoder state s_t = {s_1, ..., s_{|V_y|}}, a softmax output layer defines a probability distribution over V_y, and the probability for a token y_j is: π_w(y_j = t_o | y_{<j}, x) = exp(s_{t_o}) / Σ_{v=1}^{|V_y|} exp(s_{t_v}). (1)", "The model π_w can be seen as a parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence y = y_1, y_2, ..., y_{|y|} is defined by π_w(y|x) = Π_{j=1}^{|y|} π_w(y_j | y_{<j}, x). (2)", "In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data D_sup = {(x_t, ȳ_t)}_{t=1}^{n} of question-query pairs, where ȳ_t is the true target query for x_t, the neural network can be trained using SGD and a cross-entropy (CE) objective: L_CE = -(1/n) Σ_{t=1}^{n} Σ_{j=1}^{|ȳ|} log π_w(ȳ_j | ȳ_{<j}, x_t). (3)", "Counterfactual Learning from Deterministic Bandit Logs.", "Counterfactual Learning Objectives.", "We assume a policy π_w that, given an input x ∈ X, defines a conditional probability distribution over possible outputs y ∈ Y(x).", "Furthermore, we assume that the policy is parameterized by w and that its gradient can be derived.", "In this work, π_w is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e. that the model produces the output token by token.", "The counterfactual learning problem can be described as follows: We are given a data log of triples D_log = {(x_t, y_t, δ_t)}_{t=1}^{n}, where outputs y_t for inputs x_t were generated by a logging system under policy π_0, and loss values δ_t ∈ [-1, 0] were observed for the generated data points.", "Table 1: Gradients of the counterfactual objectives under the score function gradient estimator: ∇_w R_DPM = (1/n) Σ_{t=1}^{n} δ_t π_w(y_t|x_t) ∇_w log π_w(y_t|x_t); ∇_w R_DPM+R = (1/n) Σ_{t=1}^{n} δ_t π̄_w(y_t|x_t) [∇_w log π_w(y_t|x_t) - (1/n) Σ_{u=1}^{n} π̄_w(y_u|x_u) ∇_w log π_w(y_u|x_u)]; ∇_w R_DPM+OSL = (1/m) Σ_{t=1}^{m} δ_t π̄_{w,w'}(y_t|x_t) ∇_w log π_w(y_t|x_t); ∇_w R_DPM+T = (1/n) Σ_{t=1}^{n} [Σ_{j=1}^{|y|} δ_j π_w(y_j|x_t)] [Σ_{j=1}^{|y|} ∇_w log π_w(y_j|x_t)]; ∇_w R_DPM+T+OSL = (1/m) Σ_{t=1}^{m} [Σ_{j=1}^{|y|} δ_j π̄_{w,w'}(y_t|x_t)] [Σ_{j=1}^{|y|} ∇_w log π_w(y_j|x_t)].", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy π_w given the data log D_log.", "In the case of deterministic logging, outputs are logged with propensity π_0(y_t|x_t) = 1, t = 1, ..., n. 
This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b), without the possibility of correcting the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983): R_DPM(π_w) = (1/n) Σ_{t=1}^{n} δ_t π_w(y_t|x_t). (4)", "This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a).", "This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016).", "The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al. (2017b): R_DPM+R(π_w) = (1/n) Σ_{t=1}^{n} δ_t π̄_w(y_t|x_t) = [(1/n) Σ_{t=1}^{n} δ_t π_w(y_t|x_t)] / [(1/n) Σ_{t=1}^{n} π_w(y_t|x_t)]. (5)", "Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms, where the gradients, obtained with the score function gradient estimator (Fu, 2006), are shown in Table 1.", "Reweighting in Stochastic Learning.", "As shown in Swaminathan and Joachims (2015b) and Lawrence et al. (2017a), reweighting over the entire data log D_log is crucial, since it prevents high-loss outputs in the log from taking away probability mass from low-loss outputs.", "This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order O(1/n) that decreases as n increases (Kong, 1992).", "The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup, since minibatch sizes large enough to retain the desirable reweighting properties are infeasible for large neural networks.", "We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting.", "The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990).", "In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions.", "In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters w' from some previous iteration.", "This allows us to perform gradient descent updates and reweighting asynchronously.", "Updates are performed using minibatches; however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate.", "The new objective, called the one-step-late reweighted DPM objective (DPM+OSL), optimizes π̄_{w,w'} with respect to w for a minibatch of size m, with reweighting over the entire log of size n under parameters w': R_DPM+OSL(π_w) = (1/m) Σ_{t=1}^{m} δ_t π̄_{w,w'}(y_t|x_t) = [(1/m) Σ_{t=1}^{m} δ_t π_w(y_t|x_t)] / [(1/n) Σ_{t=1}^{n} π_{w'}(y_t|x_t)]. (6)", "If the renormalization is updated periodically, e.g. after every validation step, renormalizations under w or w' are not much different and will not hamper convergence.", "Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance.", "The gradient for learning with OSL updates is given in Table 1.", "Token-Level Rewards.", "For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to 
be helpful: For humans, it is hard to assign a graded reward to a query at the sequence level, because either the query is correct or it is not.", "In particular, with a sequence-level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct.", "Assigning rewards at the token level eases the feedback task and allows the semantic parser to learn from partially correct queries.", "Thus, assuming the underlying policy can decompose over tokens, a token-level (DPM+T) reward objective can be defined: R_DPM+T(π_w) = (1/n) Σ_{t=1}^{n} [Σ_{j=1}^{|y|} δ_j π_w(y_j|x_t)]. (7)", "Analogously, we can define an objective that combines the token-level rewards and the minibatched reweighting (DPM+T+OSL): R_DPM+T+OSL(π_w) = (1/m) Σ_{t=1}^{m} [Σ_{j=1}^{|y|} δ_j π_w(y_j|x_t)] / [(1/n) Σ_{t=1}^{n} π_{w'}(y_t|x_t)]. (8)", "Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1 (a minimal minibatch implementation of these objectives is sketched below).", "Semantic Parsing in the OpenStreetMap Domain.", "OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interest around the world.", "A point of interest consists of one or more associated GPS points.", "Further relevant information may be added at the discretion of the volunteer in the form of tags.", "Each tag consists of a key and an associated value, for example \"tourism : hotel\".", "The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis to create a natural language interface to the OSM database.", "It pairs English questions with machine readable parses, i.e. queries that can be executed against OSM.", "Human Feedback Collection.", "The task of creating a natural language interface for OSM demonstrates typical difficulties that make it expensive to collect supervised data.", "The machine readable language of the queries is based on the OVERPASS query language, which was specifically designed for the OSM database.", "It is thus not easily possible to find experts that could provide correct queries.", "It is equally difficult to ask workers at crowdsourcing platforms for the correct answer.", "For many questions, the answer set is too large to expect a worker to count or list them all in a reasonable amount of time and without errors.", "For example, for the question \"How many hotels are there in Paris?\" there are 951 hotels annotated in the OSM database.", "Instead, we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human.", "The question and the created block of statements are embedded in a user interface with a form that can be filled out by users.", "Each statement is accompanied by a set of radio buttons where a user can select either \"Yes\" or \"No\".", "For a screenshot of the interface and an example, see Figure 2.", "In total there are 8 different types of statements.", "The presence of certain tokens in a query triggers different statement types.", "For example, the token \"area\" triggers the statement type \"Town\".", "The statement is then populated with the corresponding information from the query.", "In the case of \"area\", the following OSM value is used, e.g. \"Paris\".", "With this, the meaning of every query can be captured by a set of human-understandable statements.", "For a full overview of all statement types and their triggers, see section B of the supplementary material.", "OSM tags and keys are generally understandable.", "For example, the correct OSM tag for \"hotels\" is \"tourism : hotel\", and when searching for websites, the correct question type key would be \"website\"." ] }
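A hedged sketch of the minibatch objectives in Eqs. (6) and (8) above. The callables seq_prob and token_probs stand in for the model; Z is the one-step-late constant, (1/n) times the sum of sequence probabilities over the whole log under the previous parameters w', recomputed only at validation time. Deltas follow the paper's convention of losses in [-1, 0], so the returned value is minimized.

```python
def dpm_osl_loss(batch, seq_prob, Z):
    """DPM+OSL (Eq. 6) for a minibatch of (x, y, delta) triples; minimize this value."""
    return sum(d * seq_prob(x, y) for x, y, d in batch) / (len(batch) * Z)

def dpm_t_osl_loss(batch, token_probs, Z):
    """DPM+T+OSL (Eq. 8); each entry is (x, y, per-token deltas)."""
    total = 0.0
    for x, y, deltas in batch:
        total += sum(d * p for d, p in zip(deltas, token_probs(x, y)))
    return total / (len(batch) * Z)
```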
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-127#paper-1346#slide-5
Example Feedback Formula
Introduction Task Objectives Experiments Conclusion
Introduction Task Objectives Experiments Conclusion
[]
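The simulated large-scale log described above assigns rewards by comparing the system output to the gold query. A direct sketch of both reward types:

```python
def simulate_feedback(output_tokens, gold_tokens):
    """Sequence reward: exact match; token rewards: positionwise match with the gold query."""
    seq_reward = 1.0 if output_tokens == gold_tokens else 0.0
    tok_rewards = [1.0 if i < len(gold_tokens) and t == gold_tokens[i] else 0.0
                   for i, t in enumerate(output_tokens)]
    return seq_reward, tok_rewards

print(simulate_feedback(["query@2", "area@1", "count@0"],
                        ["query@2", "nwr@1", "count@0"]))  # (0.0, [1.0, 0.0, 1.0])
```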
GEM-SciDuet-train-127#paper-1346#slide-7
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1) .", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required: Figure 1 : Left: Online reinforcement learning setup for semantic parsing setup where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.", "First, we need to construct an easy-to-use user interface that allows to collect feedback based on the parse rather than the answer.", "To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion, human bandit feedback from human users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse.", "Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016)) or question-answer pairs (Neelakantan et al., 2017) .", "Improving semantic parsers 
using weak feedback has previously been studied (Goldwasser and Roth (2013) ; Artzi and Zettlemoyer (2013) ; inter alia).", "More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "It thus differs from a bandit setup where we assume that a reward is available for only one structure.", "Our work most closely resembles the work of Iyer et al.", "(2017) who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervision approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al.", "(2016) proposed a user interface for the Freebase database that enables a fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely and from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a) , or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016) .", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al.", "(2017b) .", "Following their insight, we also assume the logs were created deterministically, i.e.", "the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing Our semantic parsing model is a state-of-theart sequence-to-sequence neural network using an encoder-decoder setup Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015) .", "We use the settings of Sennrich et al.", "(2017) , where an input sequence x = x 1 , x 2 , .", ".", ".", "x |x| (a natural language question) is encoded by a Recurrent Neural Network (RNN), each input token has an associated hidden vector h i = [ − → h i ; ← − h i ] where the former is created by a forward pass over the input, and the latter by a backward pass.", "− → h i is obtained by recur- (Chung et al., 2014) , and ← − h i is computed analogously.", "The input sequence is reduced to a single vector c = g({h 1 , .", ".", ".", ", h |x| }) which serves as the initialization of the decoder RNN.", "g calculates the average over all vectors h i .", "At each time step t the decoder state is set by s t = q(s t−1 , y t−1 , c t ).", "q is a conditional GRU with an attention mechanism and c t is the context vector computed by the attention 
"Given an output vocabulary $V_y$ and the decoder state $s_t = \{s_{t_1}, \dots, s_{t_{|V_y|}}\}$, a softmax output layer defines a probability distribution over $V_y$, and the probability of a token $y_j$ is: $\pi_w(y_j = t_o \mid y_{<j}, x) = \frac{\exp(s_{t_o})}{\sum_{v=1}^{|V_y|} \exp(s_{t_v})}$ (1).", "The model $\pi_w$ can be seen as a parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence $y = y_1, y_2, \dots, y_{|y|}$ is defined by $\pi_w(y|x) = \prod_{j=1}^{|y|} \pi_w(y_j \mid y_{<j}, x)$ (2).", "In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data $D_{sup} = \{(x_t, \bar{y}_t)\}_{t=1}^{n}$ of question-query pairs, where $\bar{y}_t$ is the true target query for $x_t$, the neural network can be trained using SGD and a cross-entropy (CE) objective: $L_{CE} = -\frac{1}{n} \sum_{t=1}^{n} \sum_{j=1}^{|\bar{y}|} \log \pi_w(\bar{y}_j \mid \bar{y}_{<j}, x_t)$ (3).",
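To make equations (1)-(3) concrete, here is a small framework-agnostic sketch (our own illustration; `step_scores` stands in for the per-step decoder scores $s_t$):

```python
# Sketch of the token softmax, sequence probability, and CE objective,
# eqs. (1)-(3); illustrative only.
import numpy as np

def token_probs(scores):
    """Eq. (1): softmax over the output vocabulary."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def sequence_log_prob(step_scores, y):
    """Eq. (2) in log space: sum_j log pi_w(y_j | y_<j, x)."""
    return sum(np.log(token_probs(s)[tok]) for s, tok in zip(step_scores, y))

def cross_entropy(batch):
    """Eq. (3): average negative log-likelihood of gold queries."""
    return -np.mean([sequence_log_prob(s, y) for s, y in batch])
```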
"Counterfactual Learning from Deterministic Bandit Logs Counterfactual Learning Objectives.", "We assume a policy $\pi_w$ that, given an input $x \in X$, defines a conditional probability distribution over possible outputs $y \in Y(x)$.", "Furthermore, we assume that the policy is parameterized by $w$ and that its gradient can be derived.", "In this work, $\pi_w$ is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e. that it produces the output token by token.", "The counterfactual learning problem can be described as follows: we are given a data log of triples $D_{log} = \{(x_t, y_t, \delta_t)\}_{t=1}^{n}$, where outputs $y_t$ for inputs $x_t$ were generated by a logging system under policy $\pi_0$, and loss values $\delta_t \in [-1, 0]$ were observed for the generated data points.", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy $\pi_w$ given the data log $D_{log}$.", "Table 1: Gradients of the counterfactual objectives discussed in this section:", "$\nabla_w \hat{R}_{DPM} = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t|x_t)\, \nabla_w \log \pi_w(y_t|x_t)$.", "$\nabla_w \hat{R}_{DPM+R} = \frac{1}{n} \sum_{t=1}^{n} \big[ \delta_t\, \bar{\pi}_w(y_t|x_t) \big( \nabla_w \log \pi_w(y_t|x_t) - \frac{1}{n} \sum_{u=1}^{n} \bar{\pi}_w(y_u|x_u)\, \nabla_w \log \pi_w(y_u|x_u) \big) \big]$.", "$\nabla_w \hat{R}_{DPM+OSL} = \frac{1}{m} \sum_{t=1}^{m} \delta_t\, \bar{\pi}_{w,w'}(y_t|x_t)\, \nabla_w \log \pi_w(y_t|x_t)$.", "$\nabla_w \hat{R}_{DPM+T} = \frac{1}{n} \sum_{t=1}^{n} \big( \prod_{j=1}^{|y|} \delta_j\, \pi_w(y_j|x_t) \big) \big( \sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j|x_t) \big)$.", "$\nabla_w \hat{R}_{DPM+T+OSL} = \frac{1}{m} \sum_{t=1}^{m} \big( \prod_{j=1}^{|y|} \delta_j \big)\, \bar{\pi}_{w,w'}(y_t|x_t) \sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j|x_t)$.", "In case of deterministic logging, outputs are logged with propensity $\pi_0(y_t|x_t) = 1$, $t = 1, \dots, n$.", "This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b), without the possibility to correct the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983): $\hat{R}_{DPM}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t|x_t)$ (4).", "This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a).", "This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016).", "The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al. (2017b): $\hat{R}_{DPM+R}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \bar{\pi}_w(y_t|x_t) = \frac{\frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_w(y_t|x_t)}$ (5).", "Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms; the gradients, obtained with the score function gradient estimator (Fu, 2006), are shown in Table 1.", "Reweighting in Stochastic Learning.", "As shown in Swaminathan and Joachims (2015b) and Lawrence et al. (2017a), reweighting over the entire data log $D_{log}$ is crucial since it prevents high-loss outputs in the log from taking away probability mass from low-loss outputs.", "This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order $O(\frac{1}{n})$ that decreases as $n$ increases (Kong, 1992).", "The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup, since minibatch sizes large enough to retain these reweighting properties are infeasible for large neural networks.", "We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting.", "The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990).", "In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions.", "In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters $w'$ from some previous iteration.", "This allows us to perform gradient descent updates and reweighting asynchronously.", "Updates are performed using minibatches; however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate.", "The new objective, called the one-step-late reweighted DPM objective (DPM+OSL), optimizes $\bar{\pi}_{w,w'}$ with respect to $w$ for a minibatch of size $m$, with reweighting over the entire log of size $n$ under parameters $w'$: $\hat{R}_{DPM+OSL}(\pi_w) = \frac{1}{m} \sum_{t=1}^{m} \delta_t\, \bar{\pi}_{w,w'}(y_t|x_t) = \frac{\frac{1}{m} \sum_{t=1}^{m} \delta_t\, \pi_w(y_t|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t|x_t)}$ (6).", "If the renormalization is updated periodically, e.g. after every validation step, renormalizations under $w$ and $w'$ are not much different and will not hamper convergence.", "Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance.", "The gradient for learning with OSL updates is given in Table 1.",
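The following sketch illustrates how the OSL normalizer of equation (6) can be computed asynchronously from the minibatch updates (our own illustration; `seq_prob(w, x, y)` is a hypothetical helper returning $\pi_w(y|x)$, e.g. built from the model sketched above):

```python
# Sketch of DPM+OSL training, eq. (6): minibatch risk with a one-step-late
# normalizer recomputed only periodically over the ENTIRE log; illustrative.
import numpy as np

def osl_normalizer(w_prev, log):
    """(1/n) * sum_t pi_{w'}(y_t|x_t), evaluated under previous parameters w'."""
    return np.mean([seq_prob(w_prev, x, y) for x, y, _ in log])

def dpm_osl_risk(w, minibatch, Z):
    """Minibatch estimate of eq. (6); deltas are losses in [-1, 0], so this
    quantity is minimized by SGD.  Z is held fixed between OSL updates."""
    return np.mean([d * seq_prob(w, x, y) for x, y, d in minibatch]) / Z

# Schedule found best in the paper's experiments: recompute Z at every
# validation step, e.g. Z = osl_normalizer(w_prev, log), then take minibatch
# SGD steps on dpm_osl_risk(w, minibatch, Z).
```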
"Token-Level Rewards.", "For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to be helpful: for humans, it is hard to assign a graded reward to a query at the sequence level, because either the query is correct or it is not.", "In particular, with a sequence-level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct.", "Assigning rewards at the token level eases the feedback task and allows the semantic parser to learn from partially correct queries.", "Thus, assuming the underlying policy can decompose over tokens, a token-level (DPM+T) reward objective can be defined: $\hat{R}_{DPM+T}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \big( \prod_{j=1}^{|y|} \delta_j\, \pi_w(y_j|x_t) \big)$ (7).", "Analogously, we can define an objective that combines the token-level rewards and the minibatched reweighting (DPM+T+OSL): $\hat{R}_{DPM+T+OSL}(\pi_w) = \frac{1}{m} \sum_{t=1}^{m} \frac{\prod_{j=1}^{|y|} \delta_j\, \pi_w(y_j|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t|x_t)}$ (8).", "Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1.",
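Equation (7) can be sketched as follows, under the same illustrative conventions as above (`token_probs_for(w, x, y)` is a hypothetical helper returning the conditional token probabilities $\pi_w(y_j \mid y_{<j}, x)$):

```python
# Sketch of DPM+T, eq. (7): per-token rewards delta_j scale per-token
# probabilities, and the per-example contribution is their product.
import numpy as np

def dpm_t_risk(w, log):
    total = 0.0
    for x, y, deltas in log:           # deltas: one reward/loss per token
        p = token_probs_for(w, x, y)   # [pi_w(y_j | y_<j, x)]_j
        total += np.prod([d * pj for d, pj in zip(deltas, p)])
    return total / len(log)
```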
"Semantic Parsing in the OpenStreetMap Domain OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interest around the world.", "A point of interest consists of one or more associated GPS points.", "Further relevant information may be added at the discretion of the volunteer in the form of tags.", "Each tag consists of a key and an associated value, for example \"tourism : hotel\".", "The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis for creating a natural language interface to the OSM database.", "It pairs English questions with machine readable parses, i.e. queries that can be executed against OSM.", "Human Feedback Collection.", "The task of creating a natural language interface for OSM demonstrates typical difficulties that make it expensive to collect supervised data.", "The machine readable language of the queries is based on the OVERPASS query language, which was specifically designed for the OSM database.", "It is thus not easy to find experts who could provide correct queries.", "It is equally difficult to ask workers at crowdsourcing platforms for the correct answer.", "For many questions, the answer set is too large to expect a worker to count or list all its elements in a reasonable amount of time and without errors.", "For example, for the question \"How many hotels are there in Paris?\" there are 951 hotels annotated in the OSM database.", "Instead we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human.", "The question and the created block of statements are embedded in a user interface with a form that can be filled out by users.", "Each statement is accompanied by a set of radio buttons where a user can select either \"Yes\" or \"No\".", "For a screenshot of the interface and an example, see Figure 2.", "In total there are 8 different types of statements.", "The presence of certain tokens in a query triggers different statement types.", "For example, the token \"area\" triggers the statement type \"Town\".", "The statement is then populated with the corresponding information from the query; in the case of \"area\", the associated OSM value is used, e.g. \"Paris\".", "With this, the meaning of every query can be captured by a set of human-understandable statements.", "For a full overview of all statement types and their triggers, see section B of the supplementary material.", "OSM tags and keys are generally understandable: for example, the correct OSM tag for \"hotels\" is \"tourism : hotel\", and when searching for websites, the correct question type key would be \"website\".", "Nevertheless, for each OSM tag or key we automatically retrieve the corresponding page on the OpenStreetMap Wiki (https://wiki.openstreetmap.org/) and extract the description for this tag or key.", "The description is made available to the user in the form of a tooltip that appears when hovering over the tag or key with the mouse.", "If a user is unsure whether an OSM tag or key is correct, they can read this description to help in their decision making.", "Once the form is submitted, a script maps each statement back to the corresponding tokens in the original query.", "These tokens then receive negative or positive feedback based on the feedback the user provided for that statement.",
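A minimal sketch of this statement-to-token mapping (the `(statement_id, token_indices)` bookkeeping is our assumption about how such a form could be wired up; the paper's exact trigger inventory is in its supplementary material):

```python
# Sketch of mapping statement-level judgments back to token-level rewards;
# illustrative only.
def token_rewards(query_tokens, statements, judgments):
    """statements: list of (statement_id, token_indices) recorded when the
    form was generated; judgments: {statement_id: bool} from the user.
    Returns one reward per query token (1 = correct, 0 = wrong)."""
    rewards = [1] * len(query_tokens)      # tokens without a statement keep 1
    for stmt_id, token_indices in statements:
        value = 1 if judgments[stmt_id] else 0
        for i in token_indices:            # reinforce tokens individually
            rewards[i] = value
    return rewards
```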
"Corpus Extension.", "Similar to the extension of the NLMAPS corpus by Lawrence and Riezler (2016), who include shortened questions that are more typical of human search behavior, we present an automatic extension that provides larger coverage of common OSM tags.", "The extended dataset, called NLMAPS V2, will be released upon acceptance of the paper.", "The basis for the extension is a hand-written, freely available online list that links natural language expressions such as \"cash machine\" to appropriate OSM tags, in this case \"amenity : atm\".", "Using the list, we generate for each unique expression-tag pair a set of question-query pairs.", "These pairs contain several placeholders which are filled automatically in a second step.", "To fill the area placeholder $LOC, we sample from a list of 30 cities from France, Germany and the UK.", "$POI is the placeholder for a point of interest; we sample it from the list of objects which are located in the previously sampled city and which have a name key.", "The value belonging to the name key is used to fill this slot.", "The placeholder $QTYPE is filled by uniformly sampling from the four primary question types available in the NLMAPS query language; on the natural language side they correspond to \"How many\", \"Where\", \"Is there\" and $KEY.", "$KEY is a further parameter belonging to the primary question operator FINDKEY and can be filled by any OSM key, such as name, website or height.", "To ensure that there will be an answer for the generated query, we first run a query with the current tag (\"amenity : atm\") to find all objects fulfilling this requirement in the area of the already sampled city; from the list of returned objects and the keys that appear in association with them, we uniformly sample a key.", "For $DIST we choose between the pre-defined options for walking distance and within-city distance.", "The expressions map to corresponding values which define the size of a radius in which objects of interest (with tag \"amenity : atm\") are located.", "If walking distance is selected, we add \"in walking distance\" to the question; otherwise no extra text is added, assuming within-city distance to be the default.", "This sampling process is repeated twice.", "Table 2 presents the corpus statistics, comparing NLMAPS (Lawrence and Riezler, 2016) to our automatic extension of the most common OSM tags.", "The automatic extension, obviating the need for expensive manual work, increases the number of question-query pairs by an order of magnitude; the numbers of tokens and types increase correspondingly, while the average sentence length drops.", "This comes as no surprise given the nature of the rather simple hand-written list, which never contains more than one tag per element, resulting in simpler question structures.", "However, the main purpose of utilizing this list is to extend coverage to previously unknown OSM tags.", "With 6,582 distinct tags compared to the previous 477, this was clearly successful.", "Together with the still complex sentences from the original corpus, a semantic parser is now able to learn both complex questions and a large variety of tags.", "An experiment that empirically validates the usefulness of the automatically created data can be found in the supplementary material, section A.",
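The sampling procedure can be summarized in a short sketch (the city list, the surface templates, and the `osm_objects` lookup are illustrative only):

```python
# Sketch of the placeholder instantiation used for the corpus extension.
import random

CITIES = ["Paris", "Heidelberg", "Edinburgh"]   # 30 cities in the paper
QTYPES = ["How many", "Where", "Is there", "$KEY"]

def instantiate(expression, tag, osm_objects):
    """One sample for an expression-tag pair such as
    ('cash machine', 'amenity : atm')."""
    loc = random.choice(CITIES)                            # $LOC
    qtype = random.choice(QTYPES)                          # $QTYPE
    # sample $KEY from keys observed on objects matching the tag, so that
    # the generated query is guaranteed to have an answer
    objects = osm_objects(tag, loc)                        # hypothetical lookup
    key = random.choice([k for obj in objects for k in obj])
    dist = random.choice(["walking", "city"])              # $DIST
    suffix = " in walking distance" if dist == "walking" else ""
    head = f"{key} of" if qtype == "$KEY" else qtype
    return f"{head} {expression} in {loc}{suffix}"
```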
"Experiments General Settings.", "In our experiments we use the sequence-to-sequence neural network package NEMATUS (Sennrich et al., 2017).", "Following the method used by Haas and Riezler (2016), we split the queries into individual tokens by taking a pre-order traversal of the original tree-like structure.", "For example, \"query(west(area(keyval('name','Paris')), nwr(keyval('railway','station'))),qtype(count))\" becomes \"query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0\".",
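A sketch of this pre-order linearization (the tree representation and the `is_string_literal` helper are our own assumptions; \"@n\" marks an operator's arity and \"@s\" a string literal, matching the example above):

```python
# Sketch of the pre-order query linearization used for tokenization;
# illustrative only.
def linearize(node):
    """node: (label, children); leaves have an empty children list."""
    label, children = node
    if not children:
        return [label + ("@s" if is_string_literal(label) else "@0")]
    tokens = [f"{label}@{len(children)}"]
    for child in children:
        tokens.extend(linearize(child))  # pre-order: parent before children
    return tokens
```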
"The SGD optimizer used is ADADELTA (Zeiler, 2012).", "The model employs 1,024 hidden units and word embeddings of size 1,000.", "The maximum sentence length is 200 and gradients are clipped if they exceed a value of 1.0.", "The stopping point is determined by validation on the development set, selecting the point at which the highest evaluation score is obtained.", "F1 validation is run after every 100 updates, and each update is made on the basis of a minibatch of size 80.", "The evaluation of all models is based on the answers obtained by executing the most likely query found by a beam search with a beam of size 12.", "We report the F1 score, which is the harmonic mean of precision and recall.", "Recall is defined as the number of fully correct answers divided by the size of the set.", "Precision is the percentage of correct answers among the set of answers with non-empty strings.", "Statistical significance between models is measured using an approximate randomization test (Noreen, 1989).", "Baseline Parser & Log Creation.", "Our experiment design assumes a baseline neural semantic parser that is trained in fully supervised fashion and is to be improved by bandit feedback obtained for system outputs produced by the baseline system for given questions.", "For this purpose, we select 2,000 question-query pairs randomly from the full extended NLMAPS V2 corpus.", "We will call this dataset $D_{sup}$.", "Using this dataset, a baseline semantic parser is trained in supervised fashion under a cross-entropy objective.", "It obtains an F1 score of 57.45% and serves as the logging policy $\pi_0$.", "Furthermore we randomly split off 1,843 and 2,000 pairs for a development and test set, respectively.", "This leaves a set of 22,765 question-query pairs.", "The questions can be used as input, and bandit feedback can be collected for the most likely output of the semantic parser.", "We refer to this dataset as $D_{log}$.", "To collect human feedback, we take the first 1,000 questions from $D_{log}$ and use $\pi_0$ to parse these questions, obtaining one output query for each.", "Five question-query pairs are discarded because the suggested query is invalid.", "For the remaining question-query pairs, the queries are each transformed into a block of human-understandable statements and embedded into the user interface described in Section 5.", "We recruited 9 users to provide feedback for these question-query pairs.", "The resulting log is referred to as $D_{human}$.", "Every question-query pair is purposely evaluated only once to mimic a realistic real-world scenario where user logs are collected as users use the system.", "In this scenario, it is also not possible to explicitly obtain several evaluations for the same question-query pair.", "Some examples of the received feedback can be found in the supplementary material, section C.", "To verify that the feedback collection is efficient, we measured the time each user took from loading a form to submitting it.", "To provide feedback for one question-query pair, users took 16.4 seconds on average, with a standard deviation of 33.2 seconds.", "The vast majority (728 instances) were completed in less than 10 seconds.", "Learning from Human Bandit Feedback.", "An analysis of $D_{human}$ shows that for 531 queries all corresponding statements were marked as correct.", "We consider a simple baseline that treats completely correct logged data as a supervised data set with which training continues using the cross-entropy objective.", "We call this baseline bandit-to-supervised conversion (B2S).", "Furthermore, we present experimental results using the log $D_{human}$ for stochastic (minibatch) gradient descent optimization of the counterfactual objectives introduced in equations 4, 6, 7 and 8.", "For the token-level feedback, we map the evaluated statements back to the corresponding tokens in the original query and assign these tokens a feedback of 0 if the corresponding statement was marked as wrong and 1 otherwise.", "In the case of sequence-level feedback, the query receives a feedback of 1 if all statements are marked correct, 0 otherwise.", "For the OSL objectives, a separate experiment (see below) showed that updating the reweighting constant after every validation step promises the best trade-off between performance and speed.", "Results, averaged over 3 runs, are reported in Table 3.", "The B2S model can slightly improve upon the baseline, but not significantly.", "DPM improves further, significantly beating the baseline.", "Using the multiplicative control variate modified for SGD by OSL updates does not seem to help in this setup.", "By moving to token-level rewards, it is possible to learn from partially correct queries.", "These partially correct queries provide valuable information that is not present in the subset of correct answers employed by the previous models.", "Optimizing DPM+T leads to a slight improvement, and combined with the multiplicative control variate, DPM+T+OSL yields an improvement of about 1.0 in F1 score upon the baseline.", "It beats both the baseline and the B2S model by a significant margin.", "Learning from Large-Scale Simulated Feedback.", "We want to investigate whether the results scale if a larger log is used.", "Thus, we use $\pi_0$ to parse all 22,765 questions from $D_{log}$ and obtain an output query for each.", "For sequence-level rewards, we assign a feedback of 1 to a query if it is identical to the true target query, 0 otherwise.", "We also simulate token-level rewards by iterating over the indices of the output and assigning a feedback of 1 if the same token appears at the current index in the true target query, 0 otherwise.",
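A sketch of this reward simulation (our own illustration of the positionwise comparison just described):

```python
# Sketch of the simulated sequence- and token-level feedback; illustrative.
def simulated_token_feedback(logged, gold):
    """1 where the logged token equals the gold token at the same index,
    0 otherwise (including positions beyond the gold query's length)."""
    return [1 if i < len(gold) and tok == gold[i] else 0
            for i, tok in enumerate(logged)]

def simulated_sequence_feedback(logged, gold):
    """1 if the logged query is identical to the gold query, else 0."""
    return 1 if logged == gold else 0
```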
"An analysis of $D_{log}$ shows that 46.27% of the queries have a sequence-level reward of 1 and are thus completely correct.", "Results are reported in Table 4.", "We see that the B2S model outperforms the baseline model by a large margin, yielding an increase in F1 score of 6.24 points.", "Optimizing the DPM objective also yields a significant increase over the baseline, but its performance falls short of the stronger B2S baseline.", "Optimizing the DPM+OSL objective leads to a substantial improvement in F1 score over optimizing DPM, but it still falls slightly short of the strong B2S baseline.", "Token-level rewards are again crucial to beat the B2S baseline significantly.", "DPM+T is already able to significantly outperform B2S in this setup, and DPM+T+OSL improves upon this further.", "Analysis.", "We analyzed the queries for which DPM+T+OSL obtained the correct answer and the baseline system did not (see Table 5).", "The analysis showed that the vast majority of previously wrong queries were fixed by correcting an OSM tag in the query.", "For example, for the question \"closest Florist from Manchester in walking distance\" the baseline system chose the tag \"landuse : retail\" in the query, whereas DPM+T+OSL learnt that the correct tag is \"shop : florist\".", "In some cases, the question type had to be corrected; e.g. the baseline's suggested query returned the location of a point of interest, whereas DPM+T+OSL correctly returns the phone number.", "Finally, in a few cases DPM+T+OSL corrected the structure of a query, e.g. by searching for a point of interest in the east of an area rather than the south.", "OSL Update Variation.", "Using the DPM+T+OSL objective and the simulated feedback setup, we vary the frequency of updating the reweighting constant.", "Results are reported in Table 6.", "Calculating the constant only once at the beginning leads to a nearly identical F1 score as not using OSL at all.", "The more frequent update strategies, once or four times per epoch, are more effective.", "Both strategies reduce variance further and lead to higher F1 scores.", "Updating four times per epoch, compared to once per epoch, leads to a nominally higher F1 performance.", "It has the additional benefit that the re-calculation is done at the same time as validation, leading to no additional slowdown, as executing the queries for the development set against the database takes longer than the re-calculation of the constant.", "Updating after every minibatch is infeasible as it slows down training too much: compared to the previous setup, iterating over one epoch takes approximately an additional 5.5 hours.", "Conclusion We introduced a scenario for improving a neural semantic parser from logged bandit feedback.", "This scenario is important to avoid complex and costly data annotation for supervised learning, and it is realistic in commercial applications where weak feedback can be collected easily in large amounts from users.", "We presented robust counterfactual learning objectives that allow us to perform stochastic gradient optimization, which is crucial in working with neural networks.", "Furthermore, we showed that it is essential to obtain reward signals at the token level in order to learn from partially correct queries.", "We presented experimental results using feedback collected from humans and a larger-scale setup with simulated feedback.", "In both cases we show that a strong baseline using a bandit-to-supervised conversion can be significantly outperformed by a combination of one-step-late reweighting and token-level rewards.", "Finally, our approach to collecting feedback can also be transferred to other domains.", "For example, Yih et al. (2016) designed a user interface to help Freebase experts efficiently create queries.", "This interface could be reversed: given a question and a query produced by a parser, the interface is filled out automatically and the user has to verify whether the information fits." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-127#paper-1346#slide-7
Counterfactual Learning
Collected log $D_{log} = \{(x_t, y_t, \delta_t)\}_{t=1}^{n}$ with: $x_t$: input; $y_t$: most likely output of the deployed system; $\delta_t \in [-1, 0]$: loss (i.e. negative reward) received from the user. Deterministic Propensity Matching (DPM): minimize the expected risk for a target policy $\pi_w$; improve $\pi_w$ using (stochastic) gradient descent; high variance, so use a multiplicative control variate
Collected log $D_{log} = \{(x_t, y_t, \delta_t)\}_{t=1}^{n}$ with: $x_t$: input; $y_t$: most likely output of the deployed system; $\delta_t \in [-1, 0]$: loss (i.e. negative reward) received from the user. Deterministic Propensity Matching (DPM): minimize the expected risk for a target policy $\pi_w$; improve $\pi_w$ using (stochastic) gradient descent; high variance, so use a multiplicative control variate
[]
GEM-SciDuet-train-127#paper-1346#slide-8
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1) .", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required: Figure 1 : Left: Online reinforcement learning setup for semantic parsing setup where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.", "First, we need to construct an easy-to-use user interface that allows to collect feedback based on the parse rather than the answer.", "To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion, human bandit feedback from human users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse.", "Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016)) or question-answer pairs (Neelakantan et al., 2017) .", "Improving semantic parsers 
using weak feedback has previously been studied (Goldwasser and Roth (2013) ; Artzi and Zettlemoyer (2013) ; inter alia).", "More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "It thus differs from a bandit setup where we assume that a reward is available for only one structure.", "Our work most closely resembles the work of Iyer et al.", "(2017) who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervision approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al.", "(2016) proposed a user interface for the Freebase database that enables a fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely and from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a) , or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016) .", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al.", "(2017b) .", "Following their insight, we also assume the logs were created deterministically, i.e.", "the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing Our semantic parsing model is a state-of-theart sequence-to-sequence neural network using an encoder-decoder setup Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015) .", "We use the settings of Sennrich et al.", "(2017) , where an input sequence x = x 1 , x 2 , .", ".", ".", "x |x| (a natural language question) is encoded by a Recurrent Neural Network (RNN), each input token has an associated hidden vector h i = [ − → h i ; ← − h i ] where the former is created by a forward pass over the input, and the latter by a backward pass.", "− → h i is obtained by recur- (Chung et al., 2014) , and ← − h i is computed analogously.", "The input sequence is reduced to a single vector c = g({h 1 , .", ".", ".", ", h |x| }) which serves as the initialization of the decoder RNN.", "g calculates the average over all vectors h i .", "At each time step t the decoder state is set by s t = q(s t−1 , y t−1 , c t ).", "q is a conditional GRU with an attention mechanism and c t is the context vector computed by the attention 
mechanism.", "Given an output vocabulary V y and the decoder state s t = {s 1 , .", ".", ".", ", s |Vy| }, a softmax output layer defines a probability distribution over V y and the probability for a token y j is: sively computing f (x i , − → h i−1 ) where f is a Gated Recurrent Unit (GRU) π w (y j = t o |y <j , x) = exp(s to ) |Vy| v=1 exp(s tv ) .", "(1) The model π w can be seen as parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence y = y 1 , y 2 , .", ".", ".", "y |y| is defined by π w (y|x) = |y| j=1 π w (y j |y <j , x).", "(2) In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data D sup = {(x t ,ȳ t )} n t=1 of question-query pairs, whereȳ t is the true target query for x t , the neural network can be trained using SGD and a cross-entropy (CE) objective: L CE = − 1 n n t=1 |ȳ| j=1 log π w (ȳ j |ȳ <j , x).", "(3) Counterfactual Learning from Deterministic Bandit Logs Counterfactual Learning Objectives.", "We assume a policy π w that, given an input x ∈ X , defines a conditional probability distribution over possible outputs y ∈ Y(x).", "Furthermore, we assume that the policy is parameterized by w and its gradient can be derived.", "In this work, π w is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e.", "that the model produces the output token by token.", "The counterfactual learning problem can be described as follows: We are given a data log of ∇ wRDPM = 1 n n t=1 δ t π w (y t |x t )∇ w log π w (y t |x t ).", "∇ wRDPM+R = 1 n n t=1 [δ tπw (y t |x t )(∇ w log π w (y t |x t ) − 1 n n u=1π w (y u |x u )∇ log π w (y u |x u ))].", "∇ wRDPM+OSL = 1 m m t=1 δ tπw,w (y t |x t )∇ w log π w (y t |x t ).", "∇ wRDPM+T = 1 n n t=1 |y| j=1 δ j π w (y j |x t ) |y| j=1 ∇ w log π w (y j |x t ).", "∇ wRDPM+T+OSL = 1 m m t=1 |y| j=1 δ jπw,w (y t |x t ) |y| j=1 ∇ w log π w (y j |x t ).", "triples D log = {(x t , y t , δ t )} n t=1 where outputs y t for inputs x t were generated by a logging system under policy π 0 , and loss values δ t ∈ [−1, 0] 2 were observed for the generated data points.", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy π w given the data log D log .", "In case of deterministic logging, outputs are logged with propensity π 0 (y t |x t ) = 1, t = 1, .", ".", ".", ", n. 
This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b) , without the possibility to correct the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983) : R DPM (π w ) = 1 n n t=1 δ t π w (y t |x t ).", "(4) This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a) .", "This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016) .", "The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al.", "(2017b) : R DPM+R (π w ) = 1 n n t=1 δ tπw (y t |x t ) (5) = 1 n n t=1 δ t π w (y t |x t ) 1 n n t=1 π w (y t |x t ) .", "Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms where gradients using the score function gradient estimator (Fu, 2006) are shown in Table 1 .", "Reweighting in Stochastic Learning.", "As shown in Swaminathan and Joachims (2015b) and Lawrence et al.", "(2017a) , reweighting over the entire data log D log is crucial since it avoids that high loss outputs in the log take away probability mass from low loss outputs.", "This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order O( 1 n ) that decreases as n increases (Kong, 1992) .", "The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup since minibatch sizes large enough to retain the desirable reweighting properties are infeasible for large neural networks.", "We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting.", "The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990) .", "In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions.", "In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters w from some previous iteration.", "This allows us to perform gradient descent updates and reweighting asynchronously.", "Updates are performed using minibatches, however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate.", "The new objective, called one-step-late reweighted DPM objective (DPM+OSL), optimizes π w,w with respect to w for a minibatch of size m, with reweighting over the entire log of size n under parameters w : R DPM+OSL (π w ) = 1 m m t=1 δ tπw,w (y t |x t ) (6) = 1 m m t=1 δ t π w (y t |x t ) 1 n n t=1 π w (y t |x t ) .", "If the renormalization is updated periodically, e.g.", "after every validation step, renormalizations under w or w are not much different and will not hamper convergence.", "Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance.", "The gradient for learning with OSL updates is given in Table 1 .", "Token-Level Rewards.", "For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to 
be helpful: For humans, it is hard to assign a graded reward to a query at a sequence level because either the query is correct or it is not.", "In particular, with a sequence level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct.", "Assigning rewards at token-level will ease the feedback task and allow the semantic parser to learn from partially correct queries.", "Thus, assuming the underlying policy can decompose over tokens, a token level (DPM+T) reward objective can be defined: R DPM+T (π w ) = 1 n n t=1   |y| j=1 δ j π w (y j |x t )   .", "(7) Analogously, we can define an objective that combines the token-level rewards and the minibatched reweighting (DPM+T+OSL): R DPM+T+OSL (π w ) = 1 m m t=1 |y| j=1 δ j π w (y j |x t ) 1 n n t=1 π w (y t |x t ) .", "(8) Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1 .", "Semantic Parsing in the OpenStreetMap Domain OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interests in the world.", "A point of interest consists of one or more associated GPS points.", "Further relevant information may be added at the discretion of the volunteer in the form of tags.", "Each tag consists of a key and an associated value, for example \"tourism : hotel\".", "The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis to create a natural language interface to the OSM database.", "It pairs English questions with machine readable parses, i.e.", "queries that can be executed against OSM.", "Human Feedback Collection.", "The task of creating a natural language interface for OSM demonstrates typical difficulties that make it expensive to collect supervised data.", "The machine readable language of the queries is based on the OVERPASS query language which was specifically designed for the OSM database.", "It is thus not easily possible to find experts that could provide correct queries.", "It is equally difficult to ask workers at crowdsourcing platforms for the correct answer.", "For many questions, the answer set is too large to expect a worker to count or list them all in a reasonable amount of time and without errors.", "For example, for the question \"How many hotels are there in Paris?\"", "there are 951 hotels annotated in the OSM database.", "Instead we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human.", "The question and the created block of statements are embedded in a user interface with a form that can be filled out by users.", "Each statement is accompanied by a set of radio buttons where a user can select either \"Yes\" or \"No\".", "For a screenshot of the interface and an example see Figure 2 .", "In total there are 8 different types of statements.", "The presence of certain tokens in a query trigger different statement types.", "For example, the token \"area\" triggers the statement type \"Town\".", "The statement is then populated with the corresponding information from the query.", "In the case of \"area\", the following OSM value is used, e.g.", "\"Paris\".", "With this, the meaning of every query can be captured by a set of human-understandable statements.", "For a full overview of all statement types and their triggers see section B of the supplementary material.", "OSM tags and keys are generally understandable.", "For example, the correct OSM tag for \"hotels\" is \"tourism : hotel\" and when searching for websites, the 
correct question type key would be \"website\".", "Nevertheless, for each OSM tag or key, we automatically search for the corresponding Wikipedia page on the OpenStreetMap Wiki 3 and extract the description for this tag or key.", "The description is made available to the user in form of a tool-tip that appears when hovering over the tag or key with the mouse.", "If a user is unsure if a OSM tag or key is correct, they can read this description to help in their decision making.", "Once the form is submitted, a script maps each statement back to the corresponding tokens in the original query.", "These tokens then receive negative or positive feedback based on the feedback the user provided for that statement.", "Corpus Extension.", "Similar to the extension of the NLMAPS corpus by Lawrence and Riezler (2016) who include shortened questions which are more typically used by humans in search tasks, we present an automatic extension that allows a larger coverage of common OSM tags.", "4 The basis for the extension is a hand-written, online freely available list 5 that links natural language expressions such as \"cash machine\" to appropriate OSM tags, in this case \"amenity : atm\".", "Using the list, we generate for each unique expression-tag pair a set of question-query pairs.", "These latter pairs contain 3 https://wiki.openstreetmap.org/ 4 The extended dataset, called NLMAPS V2, will be released upon acceptance of the paper.", "(Lawrence and Riezler, 2016) and the automatic extensions of the most common OSM tags.", "several placeholders which will be filled automatically in a second step.", "To fill the area placeholder $LOC, we sample from a list of 30 cities from France, Germany and the UK.", "$POI is the placeholder for a point of interest.", "We sample it from the list of objects which are located in the prior sampled city and which have a name key.", "The corresponding value belonging to the name key will be used to fill this spot.", "The placeholder $QTYPE is filled by uniformly sampling from the four primary question types available in the NLMAPS query language.", "On the natural language side they corresponded to \"How many\", \"Where\", \"Is there\" and $KEY.", "$KEY is a further parameter belonging to the primary question operator FINDKEY.", "It can be filled by any OSM key, such as name, website or height.", "To ensure that there will be an answer for the generated query, we first ran a query with the current tag (\"amenity : atm\") to find all objects fulfilling this requirement in the area of the already sampled city.", "From the list of returned objects and the keys that appear in association with them, we uniformly sampled a key.", "For $DIST we chose between the pre-defined options for walking distance and within city distance.", "The expressions map to corresponding values which define the size of a radius in which objects of interest (with tag \"amenity : atm\") will be located.", "If the walking distance was selected, we added \"in walking distance\" to the question.", "Otherwise no extra text was added to the question, assuming the within city distance to be the default.", "This sampling process was repeated twice.", "Table 2 presents the corpus statistics, comparing NLMAPS to our extension.", "The automatic extension, obviating the need for expensive manual work, allows a vast increase of question-query pairs by an order of magnitude.", "Consequently the number of tokens and types increase in a similar vein.", "However, the average sentence length drops.", "This comes as 
no surprise due to the nature of the rather simple hand-written list which contains never more than one tag for an element, resulting in simpler question structures.", "However, the main idea of utilizing this list is to extend the coverage to previously unknown OSM tags.", "With 6,582 distinct tags compared to the previous 477, this was clearly successful.", "Together with the still complex sentences from the original corpus, a semantic parser is now able to learn both complex questions and a large variety of tags.", "An experiment that empirically validates the usefulness of the automatically created data can be found in the supplementary material, section A.", "Experiments General Settings.", "In our experiments we use the sequence-to-sequence neural network package NEMATUS (Sennrich et al., 2017) .", "Following the method used by Haas and Riezler (2016) , we split the queries into individual tokens by taking a pre-order traversal of the original tree-like structure.", "For example, \"query(west(area(keyval('name','Paris')), nwr(keyval('railway','station'))),qtype(count))\" becomes \"query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0\".", "The SGD optimizer used is ADADELTA (Zeiler, 2012).", "The model employs 1,024 hidden units and word embeddings of size 1,000.", "The maximum sentence length is 200 and gradients are clipped if they exceed a value of 1.0.", "The stopping point is determined by validation on the development set and selecting the point at which the highest evaluation score is obtained.", "F1 validation is run after every 100 updates, and each update is made on the basis of a minibatch of size 80.", "The evaluation of all models is based on the answers obtained by executing the most likely query obtained after a beam search with a beam of size 12.", "We report the F1 score which is the harmonic mean of precision and recall.", "Recall is defined as the percentage of fully correct answers divided by the set size.", "Precision is the percentage of correct answers out of the set of answers with non-empty strings.", "Statistical significance between models is measured using an approximate randomization test (Noreen, 1989) .", "Baseline Parser & Log Creation.", "Our experiment design assumes a baseline neural semantic parser that is trained in fully supervised fashion, and is to be improved by bandit feedback obtained for system outputs from the baseline system for given questions.", "For this purpose, we select 2,000 question-query pairs randomly from the full extended NLMAPS V2 corpus.", "We will call this dataset D sup .", "Using this dataset, a baseline semantic parser is trained in supervised fashion under a cross-entropy objective.", "It obtains an F1 score of 57.45% and serves as the logging policy π 0 .", "Furthermore we randomly split off 1,843 and 2,000 pairs for a development and test set, respectively.", "This leaves a set of 22,765 question-query pairs.", "The questions can be used as input and bandit feedback can be collected for the most likely output of the semantic parser.", "We refer to this dataset as D log .", "To collect human feedback, we take the first 1,000 questions from D log and use π 0 to parse these questions to obtain one output query for each.", "5 question-query pairs are discarded because the suggested query is invalid.", "For the remaining question-query pairs, the queries are each transformed into a block of human-understandable statements and embedded into the user interface described in Section 
5.", "We recruited 9 users to provide feedback for these question-query pairs.", "The resulting log is referred to as D human .", "Every question-query pair is purposely evaluated only once to mimic a realistic real-world scenario where user logs are collected as users use the system.", "In this scenario, it is also not possible to explicitly obtain several evaluations for the same question-query pair.", "Some examples of the received feedback can be found in the supplementary material, section C. To verify that the feedback collection is efficient, we measured the time each user took from loading a form to submitting it.", "To provide feedback for one question-query pair, users took 16.4 seconds on average with a standard deviation of 33.2 seconds.", "The vast majority (728 instances) are completed in less than 10 seconds.", "Learning from Human Bandit Feedback.", "An analysis of D human shows that for 531 queries all corresponding statements were marked as correct.", "We consider a simple baseline that treats completely correct logged data as a supervised data set with which training continues using the crossentropy objective.", "We call this baseline banditto-supervised conversion (B2S).", "Furthermore, we present experimental results using the log D human for stochastic (minibatch) gradient descent optimization of the counterfactual objectives introduced in equations 4, 6, 7 and 8.", "For the tokenlevel feedback, we map the evaluated statements back to the corresponding tokens in the original query and assign these tokens a feedback of 0 if the corresponding statement was marked as wrong and 1 otherwise.", "In the case of sequence-level feedback, the query receives a feedback of 1 if all statements are marked correct, 0 otherwise.", "For the OSL objectives, a separate experiment (see below) showed that updating the reweighting constant after every validation step promises the best trade-off between performance and speed.", "Results, averaged over 3 runs, are reported in Table 3 .", "The B2S model can slightly improve upon the baseline but not significantly.", "DPM improves further, significantly beating the baseline.", "Using the multiplicative control variate modified for SGD by OSL updates does not seem to help in this setup.", "By moving to token-level rewards, it is possible to learn from partially correct queries.", "These partially correct queries provide valuable information that is not present in the subset of correct answers employed by the previous models.", "Optimizing DPM+T leads to a slight improvement and combined with the multiplicative control variate, DPM+T+OSL yields an improvement of about 1.0 in F1 score upon the baseline.", "It beats both the baseline and the B2S model by a significant margin.", "Learning from Large-Scale Simulated Feedback.", "We want to investigate whether the results scale if a larger log is used.", "Thus, we use π 0 to parse all 22,765 questions from D log and obtain for each an output query.", "For sequence level rewards, we assign feedback of 1 for a query if it is identical to the true target query, 0 otherwise.", "We also simulate token-level rewards by iterating over the indices of the output and assigning a feedback of 1 if the same token appears at the current index for the true target query, 0 otherwise.", "An analysis of D log shows that 46.27% of the queries have a sequence level reward of 1 and are Table 4 .", "We see that the B2S model outperforms the baseline model by a large margin, yielding an increase in F1 score by 6.24 
points.", "Optimizing the DPM objective also yields a significant increase over the baseline, but its performance falls short of the stronger B2S baseline.", "Optimizing the DPM+OSL objective leads to a substantial improvement in F1 score over optimizing DPM but still falls slightly short of the strong B2S baseline.", "Token-level rewards are again crucial to beat the B2S baseline significantly.", "DPM+T is already able to significantly outperform B2S in this setup and DPM+T+OSL can improve upon this further.", "We also analyzed the queries where DPM+T+OSL obtained the correct answer and the baseline system did not (see Table 5).", "The analysis showed that the vast majority of previously wrong queries were fixed by correcting an OSM tag in the query.", "For example, for the question \"closest Florist from Manchester in walking distance\" the baseline system chose the tag \"landuse : retail\" in the query, whereas DPM+T+OSL learnt that the correct tag is \"shop : florist\".", "In some cases, the question type had to be corrected, e.g. the baseline's suggested query returned the location of a point of interest but DPM+T+OSL correctly returns the phone number.", "Finally, in a few cases DPM+T+OSL corrected the structure of a query, e.g. by searching for a point of interest in the east of an area rather than the south.", "Analysis OSL Update Variation.", "Using the DPM+T+OSL objective and the simulated feedback setup, we vary the frequency of updating the reweighting constant.", "Results are reported in Table 6.", "Calculating the constant only once at the beginning leads to a near identical result in F1 score as not using OSL.", "The more frequent update strategies, once or four times per epoch, are more effective.", "Both strategies reduce variance further and lead to higher F1 scores.", "Updating four times per epoch, compared to once per epoch, leads to a nominally higher performance in F1.", "It has the additional benefit that the re-calculation is done at the same time as the validation, leading to no additional slowdown, as executing the development-set queries against the database takes longer than re-calculating the constant.", "Updating after every minibatch is infeasible as it slows down training too much.", "Compared to the previous setup, iterating over one epoch takes approximately an additional 5.5 hours.", "Conclusion We introduced a scenario for improving a neural semantic parser from logged bandit feedback.", "This scenario is important to avoid complex and costly data annotation for supervised learning, and it is realistic in commercial applications where weak feedback can be collected easily in large amounts from users.", "We presented robust counterfactual learning objectives that allow us to perform stochastic gradient optimization, which is crucial when working with neural networks.", "Furthermore, we showed that it is essential to obtain reward signals at the token level in order to learn from partially correct queries.", "We presented experimental results using feedback collected from humans and a larger-scale setup with simulated feedback.", "In both cases we show that a strong baseline using a bandit-to-supervised conversion can be significantly outperformed by a combination of one-step-late reweighting and token-level rewards.", "Finally, our approach to collecting feedback can also be transferred to other domains.", "For example, Yih et al. (2016) designed a user interface to help Freebase experts to efficiently create queries.", "This interface could be reversed: given a question and a query 
produced by a parser, the interface is filled out automatically and the user has to verify if the information fits." ] }
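The pre-order linearization of queries into sequence tokens described in the experimental settings above can be made concrete with a short sketch. This is an illustrative reconstruction, not the paper's actual preprocessing code; the nested-tuple representation of a query is an assumption made here for demonstration.

```python
# Sketch of the pre-order linearization of tree-structured NLMAPS queries
# into tokens of the form "label@arity" (quoted string values get "@s"),
# matching the example "query(west(...))" -> "query@2 west@2 ...".

def linearize(node):
    """Pre-order traversal emitting one 'label@arity' token per node."""
    if isinstance(node, str):            # a quoted value such as 'Paris'
        return [node + "@s"]
    label, children = node[0], node[1:]
    tokens = ["%s@%d" % (label, len(children))]
    for child in children:
        tokens.extend(linearize(child))
    return tokens

# The query from the example above, as a nested (label, children...) tuple:
query = ("query",
         ("west",
          ("area", ("keyval", ("name",), "Paris")),
          ("nwr", ("keyval", ("railway",), "station"))),
         ("qtype", ("count",)))

print(" ".join(linearize(query)))
# query@2 west@2 area@1 keyval@2 name@0 Paris@s
# nwr@1 keyval@2 railway@0 station@s qtype@1 count@0
```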
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
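The experiments above map each judged statement back to the tokens of the logged query and assign those tokens a feedback of 0 or 1. A minimal sketch of that mapping follows; the `spans` index linking statement ids to token positions is an assumed stand-in for the paper's (unspecified) mapping script, and the neutral default of 1 for uncovered tokens is an assumption as well.

```python
# Sketch: convert per-statement yes/no judgments from the feedback form
# into token-level rewards for the linearized query.

def token_rewards(query_tokens, spans, judgments):
    """judgments: {statement_id: True/False}; spans: {statement_id: [positions]}."""
    rewards = [1.0] * len(query_tokens)   # assumed default for uncovered tokens
    for stmt_id, positions in spans.items():
        value = 1.0 if judgments[stmt_id] else 0.0
        for p in positions:
            rewards[p] = value
    return rewards

tokens = ("query@2 area@1 keyval@2 name@0 Paris@s "
          "nwr@1 keyval@2 tourism@0 hotel@s qtype@1 count@0").split()
spans = {"town": [1, 2, 3, 4], "poi": [5, 6, 7, 8], "qtype": [9, 10]}
judgments = {"town": True, "poi": False, "qtype": True}
print(token_rewards(tokens, spans, judgments))
# [1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0]
```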
GEM-SciDuet-train-127#paper-1346#slide-8
Multiplicative Control Variate
For random variables X and Y, with $\bar{Y}$ the expectation of Y: the reweighted estimator (the RHS) has lower variance if Y positively correlates with X. DPM with Reweighting (DPM+R) reduces variance but introduces a bias of order $O(\frac{1}{n})$ that decreases as n increases, so n should be as large as possible. Problem: in stochastic minibatch learning, n is too small.
For random variables X and Y, with $\bar{Y}$ the expectation of Y: the reweighted estimator (the RHS) has lower variance if Y positively correlates with X. DPM with Reweighting (DPM+R) reduces variance but introduces a bias of order $O(\frac{1}{n})$ that decreases as n increases, so n should be as large as possible. Problem: in stochastic minibatch learning, n is too small.
[]
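The variance argument on this slide can be illustrated numerically. The following toy simulation, with invented distributions, compares the plain Monte Carlo estimator against the multiplicatively reweighted one; it is a sketch of the general control-variate effect, not of the paper's parser experiments.

```python
# Toy demonstration of the multiplicative control variate:
# if X correlates positively with Y and E[Y] is known, the ratio
# estimator (sum X / sum Y) * E[Y] has lower variance than mean(X).
import random

random.seed(0)
E_Y = 1.0  # true expectation of Y in this toy setup

def one_estimate(n):
    ys = [random.uniform(0.5, 1.5) for _ in range(n)]      # E[Y] = 1
    xs = [2.0 * y + random.gauss(0.0, 0.05) for y in ys]   # X ~ 2Y + noise
    plain = sum(xs) / n
    reweighted = sum(xs) / sum(ys) * E_Y
    return plain, reweighted

def variance(samples):
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

plain, reweighted = zip(*(one_estimate(50) for _ in range(2000)))
print("variance of plain estimator     :", variance(plain))
print("variance of reweighted estimator:", variance(reweighted))  # much smaller
```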
GEM-SciDuet-train-127#paper-1346#slide-9
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1).", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required.", "(Figure 1 caption: Left: Online reinforcement learning setup for semantic parsing, where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.)", "First, we need to construct an easy-to-use user interface that allows us to collect feedback based on the parse rather than the answer.", "To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion, bandit feedback from human users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse.", "Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016) or question-answer pairs (Neelakantan et al., 2017).", "Improving semantic parsers 
using weak feedback has previously been studied (Goldwasser and Roth (2013); Artzi and Zettlemoyer (2013); inter alia).", "More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al. (2017); Mou et al. (2017); Peng et al. (2017); inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "It thus differs from a bandit setup where we assume that a reward is available for only one structure.", "Our work most closely resembles the work of Iyer et al. (2017) who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervised approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al. (2016) proposed a user interface for the Freebase database that enables a fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely and from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a), or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016).", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al. (2017b).", "Following their insight, we also assume the logs were created deterministically, i.e. the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing Our semantic parsing model is a state-of-the-art sequence-to-sequence neural network using an encoder-decoder setup (Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015).", "We use the settings of Sennrich et al. (2017), where an input sequence $x = x_1, x_2, \ldots, x_{|x|}$ (a natural language question) is encoded by a Recurrent Neural Network (RNN); each input token has an associated hidden vector $h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]$ where the former is created by a forward pass over the input, and the latter by a backward pass.", "$\overrightarrow{h_i}$ is obtained by recursively computing $f(x_i, \overrightarrow{h_{i-1}})$, where $f$ is a Gated Recurrent Unit (GRU) (Chung et al., 2014), and $\overleftarrow{h_i}$ is computed analogously.", "The input sequence is reduced to a single vector $c = g(\{h_1, \ldots, h_{|x|}\})$ which serves as the initialization of the decoder RNN.", "$g$ calculates the average over all vectors $h_i$.", "At each time step $t$ the decoder state is set by $s_t = q(s_{t-1}, y_{t-1}, c_t)$.", "$q$ is a conditional GRU with an attention mechanism and $c_t$ is the context vector computed by the attention mechanism.", "Given an output vocabulary $V_y$ and the decoder state $s_t = \{s_1, \ldots, s_{|V_y|}\}$, a softmax output layer defines a probability distribution over $V_y$ and the probability for a token $y_j$ is: $\pi_w(y_j = t_o | y_{<j}, x) = \frac{\exp(s_{t_o})}{\sum_{v=1}^{|V_y|} \exp(s_{t_v})}$ (1).", "The model $\pi_w$ can be seen as a parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence $y = y_1, y_2, \ldots, y_{|y|}$ is defined by $\pi_w(y|x) = \prod_{j=1}^{|y|} \pi_w(y_j | y_{<j}, x)$ (2).", "In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data $D_{sup} = \{(x_t, \bar{y}_t)\}_{t=1}^{n}$ of question-query pairs, where $\bar{y}_t$ is the true target query for $x_t$, the neural network can be trained using SGD and a cross-entropy (CE) objective: $L_{CE} = -\frac{1}{n} \sum_{t=1}^{n} \sum_{j=1}^{|\bar{y}|} \log \pi_w(\bar{y}_j | \bar{y}_{<j}, x)$ (3).", "Counterfactual Learning from Deterministic Bandit Logs Counterfactual Learning Objectives.", "We assume a policy $\pi_w$ that, given an input $x \in X$, defines a conditional probability distribution over possible outputs $y \in Y(x)$.", "Furthermore, we assume that the policy is parameterized by $w$ and its gradient can be derived.", "In this work, $\pi_w$ is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e. that the model produces the output token by token.", "The counterfactual learning problem can be described as follows: We are given a data log of triples $D_{log} = \{(x_t, y_t, \delta_t)\}_{t=1}^{n}$ where outputs $y_t$ for inputs $x_t$ were generated by a logging system under policy $\pi_0$, and loss values $\delta_t \in [-1, 0]$ were observed for the generated data points.", "(Table 1, score function gradients of the objectives: $\nabla_w \hat{R}_{DPM} = \frac{1}{n} \sum_{t=1}^{n} \delta_t \pi_w(y_t|x_t) \nabla_w \log \pi_w(y_t|x_t)$; $\nabla_w \hat{R}_{DPM+R} = \frac{1}{n} \sum_{t=1}^{n} [\delta_t \bar{\pi}_w(y_t|x_t) (\nabla_w \log \pi_w(y_t|x_t) - \frac{1}{n} \sum_{u=1}^{n} \bar{\pi}_w(y_u|x_u) \nabla_w \log \pi_w(y_u|x_u))]$; $\nabla_w \hat{R}_{DPM+OSL} = \frac{1}{m} \sum_{t=1}^{m} \delta_t \bar{\pi}_{w,w'}(y_t|x_t) \nabla_w \log \pi_w(y_t|x_t)$; $\nabla_w \hat{R}_{DPM+T} = \frac{1}{n} \sum_{t=1}^{n} \big(\sum_{j=1}^{|y|} \delta_j \pi_w(y_j|x_t)\big) \big(\sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j|x_t)\big)$; $\nabla_w \hat{R}_{DPM+T+OSL} = \frac{1}{m} \sum_{t=1}^{m} \big(\sum_{j=1}^{|y|} \delta_j \bar{\pi}_{w,w'}(y_t|x_t)\big) \big(\sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j|x_t)\big)$.)", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy $\pi_w$ given the data log $D_{log}$.", "In case of deterministic logging, outputs are logged with propensity $\pi_0(y_t|x_t) = 1$, $t = 1, \ldots, n$. 
This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b), without the possibility to correct the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983): $R_{DPM}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t \pi_w(y_t|x_t)$ (4).", "This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a).", "This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016).", "The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al. (2017b): $R_{DPM+R}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t \bar{\pi}_w(y_t|x_t) = \frac{\frac{1}{n} \sum_{t=1}^{n} \delta_t \pi_w(y_t|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_w(y_t|x_t)}$ (5).", "Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms where gradients using the score function gradient estimator (Fu, 2006) are shown in Table 1.", "Reweighting in Stochastic Learning.", "As shown in Swaminathan and Joachims (2015b) and Lawrence et al. (2017a), reweighting over the entire data log $D_{log}$ is crucial since it avoids that high loss outputs in the log take away probability mass from low loss outputs.", "This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order $O(\frac{1}{n})$ that decreases as $n$ increases (Kong, 1992).", "The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup since minibatch sizes large enough to retain the desirable reweighting properties are infeasible for large neural networks.", "We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting.", "The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990).", "In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions.", "In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters $w'$ from some previous iteration.", "This allows us to perform gradient descent updates and reweighting asynchronously.", "Updates are performed using minibatches, however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate.", "The new objective, called one-step-late reweighted DPM objective (DPM+OSL), optimizes $\pi_{w,w'}$ with respect to $w$ for a minibatch of size $m$, with reweighting over the entire log of size $n$ under parameters $w'$: $R_{DPM+OSL}(\pi_w) = \frac{1}{m} \sum_{t=1}^{m} \delta_t \bar{\pi}_{w,w'}(y_t|x_t) = \frac{1}{m} \sum_{t=1}^{m} \frac{\delta_t \pi_w(y_t|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t|x_t)}$ (6).", "If the renormalization is updated periodically, e.g. after every validation step, renormalizations under $w$ or $w'$ are not much different and will not hamper convergence.", "Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance.", "The gradient for learning with OSL updates is given in Table 1.", "Token-Level Rewards.", "For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to 
be helpful: For humans, it is hard to assign a graded reward to a query at a sequence level because either the query is correct or it is not.", "In particular, with a sequence-level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct.", "Assigning rewards at the token level will ease the feedback task and allow the semantic parser to learn from partially correct queries.", "Thus, assuming the underlying policy can decompose over tokens, a token-level (DPM+T) reward objective can be defined: $R_{DPM+T}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \Big[ \sum_{j=1}^{|y|} \delta_j \pi_w(y_j|x_t) \Big]$ (7).", "Analogously, we can define an objective that combines the token-level rewards and the minibatched reweighting (DPM+T+OSL): $R_{DPM+T+OSL}(\pi_w) = \frac{1}{m} \sum_{t=1}^{m} \frac{\sum_{j=1}^{|y|} \delta_j \pi_w(y_j|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t|x_t)}$ (8).", "Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1.", "Semantic Parsing in the OpenStreetMap Domain OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interest in the world.", "A point of interest consists of one or more associated GPS points.", "Further relevant information may be added at the discretion of the volunteer in the form of tags.", "Each tag consists of a key and an associated value, for example \"tourism : hotel\".", "The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis to create a natural language interface to the OSM database.", "It pairs English questions with machine readable parses, i.e. queries that can be executed against OSM.", "Human Feedback Collection.", "The task of creating a natural language interface for OSM demonstrates typical difficulties that make it expensive to collect supervised data.", "The machine readable language of the queries is based on the OVERPASS query language which was specifically designed for the OSM database.", "It is thus not easily possible to find experts that could provide correct queries.", "It is equally difficult to ask workers at crowdsourcing platforms for the correct answer.", "For many questions, the answer set is too large to expect a worker to count or list them all in a reasonable amount of time and without errors.", "For example, for the question \"How many hotels are there in Paris?\" there are 951 hotels annotated in the OSM database.", "Instead we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human.", "The question and the created block of statements are embedded in a user interface with a form that can be filled out by users.", "Each statement is accompanied by a set of radio buttons where a user can select either \"Yes\" or \"No\".", "For a screenshot of the interface and an example see Figure 2.", "In total there are 8 different types of statements.", "The presence of certain tokens in a query triggers different statement types.", "For example, the token \"area\" triggers the statement type \"Town\".", "The statement is then populated with the corresponding information from the query.", "In the case of \"area\", the following OSM value is used, e.g. \"Paris\".", "With this, the meaning of every query can be captured by a set of human-understandable statements.", "For a full overview of all statement types and their triggers see section B of the supplementary material.", "OSM tags and keys are generally understandable.", "For example, the correct OSM tag for \"hotels\" is \"tourism : hotel\" and when searching for websites, the 
correct question type key would be \"website\".", "Nevertheless, for each OSM tag or key, we automatically search for the corresponding Wikipedia page on the OpenStreetMap Wiki (https://wiki.openstreetmap.org/) and extract the description for this tag or key.", "The description is made available to the user in the form of a tool-tip that appears when hovering over the tag or key with the mouse.", "If a user is unsure whether an OSM tag or key is correct, they can read this description to help in their decision making.", "Once the form is submitted, a script maps each statement back to the corresponding tokens in the original query.", "These tokens then receive negative or positive feedback based on the feedback the user provided for that statement.", "Corpus Extension.", "Similar to the extension of the NLMAPS corpus by Lawrence and Riezler (2016) who include shortened questions which are more typically used by humans in search tasks, we present an automatic extension that allows a larger coverage of common OSM tags.", "The extended dataset, called NLMAPS V2, will be released upon acceptance of the paper.", "The basis for the extension is a hand-written, freely available online list that links natural language expressions such as \"cash machine\" to appropriate OSM tags, in this case \"amenity : atm\".", "Using the list, we generate for each unique expression-tag pair a set of question-query pairs.", "These latter pairs contain several placeholders which will be filled automatically in a second step.", "To fill the area placeholder $LOC, we sample from a list of 30 cities from France, Germany and the UK.", "$POI is the placeholder for a point of interest.", "We sample it from the list of objects which are located in the prior sampled city and which have a name key.", "The corresponding value belonging to the name key will be used to fill this spot.", "The placeholder $QTYPE is filled by uniformly sampling from the four primary question types available in the NLMAPS query language.", "On the natural language side these correspond to \"How many\", \"Where\", \"Is there\" and $KEY.", "$KEY is a further parameter belonging to the primary question operator FINDKEY.", "It can be filled by any OSM key, such as name, website or height.", "To ensure that there will be an answer for the generated query, we first ran a query with the current tag (\"amenity : atm\") to find all objects fulfilling this requirement in the area of the already sampled city.", "From the list of returned objects and the keys that appear in association with them, we uniformly sampled a key.", "For $DIST we chose between the pre-defined options for walking distance and within-city distance.", "The expressions map to corresponding values which define the size of a radius in which objects of interest (with tag \"amenity : atm\") will be located.", "If the walking distance was selected, we added \"in walking distance\" to the question.", "Otherwise no extra text was added to the question, assuming the within-city distance to be the default.", "This sampling process was repeated twice.", "(Table 2 caption: corpus statistics for NLMAPS (Lawrence and Riezler, 2016) and the automatic extensions of the most common OSM tags.)", "Table 2 presents the corpus statistics, comparing NLMAPS to our extension.", "The automatic extension, obviating the need for expensive manual work, allows a vast increase of question-query pairs by an order of magnitude.", "Consequently, the numbers of tokens and types increase in a similar vein.", "However, the average sentence length drops.", "This comes as 
no surprise due to the nature of the rather simple hand-written list which never contains more than one tag per element, resulting in simpler question structures.", "However, the main idea of utilizing this list is to extend the coverage to previously unknown OSM tags.", "With 6,582 distinct tags compared to the previous 477, this was clearly successful.", "Together with the still complex sentences from the original corpus, a semantic parser is now able to learn both complex questions and a large variety of tags.", "An experiment that empirically validates the usefulness of the automatically created data can be found in the supplementary material, section A.", "Experiments General Settings.", "In our experiments we use the sequence-to-sequence neural network package NEMATUS (Sennrich et al., 2017).", "Following the method used by Haas and Riezler (2016), we split the queries into individual tokens by taking a pre-order traversal of the original tree-like structure.", "For example, \"query(west(area(keyval('name','Paris')), nwr(keyval('railway','station'))),qtype(count))\" becomes \"query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0\".", "The SGD optimizer used is ADADELTA (Zeiler, 2012).", "The model employs 1,024 hidden units and word embeddings of size 1,000.", "The maximum sentence length is 200 and gradients are clipped if they exceed a value of 1.0.", "The stopping point is determined by validation on the development set and selecting the point at which the highest evaluation score is obtained.", "F1 validation is run after every 100 updates, and each update is made on the basis of a minibatch of size 80.", "The evaluation of all models is based on the answers obtained by executing the most likely query obtained after a beam search with a beam of size 12.", "We report the F1 score, which is the harmonic mean of precision and recall.", "Recall is defined as the number of fully correct answers divided by the set size.", "Precision is the percentage of correct answers out of the set of answers with non-empty strings.", "Statistical significance between models is measured using an approximate randomization test (Noreen, 1989).", "Baseline Parser & Log Creation.", "Our experiment design assumes a baseline neural semantic parser that is trained in fully supervised fashion, and is to be improved by bandit feedback obtained for system outputs from the baseline system for given questions.", "For this purpose, we select 2,000 question-query pairs randomly from the full extended NLMAPS V2 corpus.", "We will call this dataset $D_{sup}$.", "Using this dataset, a baseline semantic parser is trained in supervised fashion under a cross-entropy objective.", "It obtains an F1 score of 57.45% and serves as the logging policy $\pi_0$.", "Furthermore, we randomly split off 1,843 and 2,000 pairs for a development and test set, respectively.", "This leaves a set of 22,765 question-query pairs.", "The questions can be used as input and bandit feedback can be collected for the most likely output of the semantic parser.", "We refer to this dataset as $D_{log}$.", "To collect human feedback, we take the first 1,000 questions from $D_{log}$ and use $\pi_0$ to parse these questions to obtain one output query for each.", "5 question-query pairs are discarded because the suggested query is invalid.", "For the remaining question-query pairs, the queries are each transformed into a block of human-understandable statements and embedded into the user interface described in Section 
5.", "We recruited 9 users to provide feedback for these question-query pairs.", "The resulting log is referred to as $D_{human}$.", "Every question-query pair is purposely evaluated only once to mimic a realistic real-world scenario where user logs are collected as users use the system.", "In this scenario, it is also not possible to explicitly obtain several evaluations for the same question-query pair.", "Some examples of the received feedback can be found in the supplementary material, section C. To verify that the feedback collection is efficient, we measured the time each user took from loading a form to submitting it.", "To provide feedback for one question-query pair, users took 16.4 seconds on average with a standard deviation of 33.2 seconds.", "The vast majority (728 instances) are completed in less than 10 seconds.", "Learning from Human Bandit Feedback.", "An analysis of $D_{human}$ shows that for 531 queries all corresponding statements were marked as correct.", "We consider a simple baseline that treats completely correct logged data as a supervised data set with which training continues using the cross-entropy objective.", "We call this baseline bandit-to-supervised conversion (B2S).", "Furthermore, we present experimental results using the log $D_{human}$ for stochastic (minibatch) gradient descent optimization of the counterfactual objectives introduced in equations 4, 6, 7 and 8.", "For the token-level feedback, we map the evaluated statements back to the corresponding tokens in the original query and assign these tokens a feedback of 0 if the corresponding statement was marked as wrong and 1 otherwise.", "In the case of sequence-level feedback, the query receives a feedback of 1 if all statements are marked correct, 0 otherwise.", "For the OSL objectives, a separate experiment (see below) showed that updating the reweighting constant after every validation step promises the best trade-off between performance and speed.", "Results, averaged over 3 runs, are reported in Table 3.", "The B2S model can slightly improve upon the baseline but not significantly.", "DPM improves further, significantly beating the baseline.", "Using the multiplicative control variate modified for SGD by OSL updates does not seem to help in this setup.", "By moving to token-level rewards, it is possible to learn from partially correct queries.", "These partially correct queries provide valuable information that is not present in the subset of correct answers employed by the previous models.", "Optimizing DPM+T leads to a slight improvement and, combined with the multiplicative control variate, DPM+T+OSL yields an improvement of about 1.0 in F1 score upon the baseline.", "It beats both the baseline and the B2S model by a significant margin.", "Learning from Large-Scale Simulated Feedback.", "We want to investigate whether the results scale if a larger log is used.", "Thus, we use $\pi_0$ to parse all 22,765 questions from $D_{log}$ and obtain for each an output query.", "For sequence-level rewards, we assign feedback of 1 for a query if it is identical to the true target query, 0 otherwise.", "We also simulate token-level rewards by iterating over the indices of the output and assigning a feedback of 1 if the same token appears at the current index for the true target query, 0 otherwise.", "An analysis of $D_{log}$ shows that 46.27% of the queries have a sequence-level reward of 1 and are thus completely correct.", "Results are reported in Table 4.", "We see that the B2S model outperforms the baseline model by a large margin, yielding an increase in F1 score by 6.24 
points.", "Optimizing the DPM objective also yields a significant increase over the baseline, but its performance falls short of the stronger B2S baseline.", "Optimizing the DPM+OSL objective leads to a substantial improvement in F1 score over optimizing DPM but still falls slightly short of the strong B2S baseline.", "Token-level rewards are again crucial to beat the B2S baseline significantly.", "DPM+T is already able to significantly outperform B2S in this setup and DPM+T+OSL can improve upon this further.", "We also analyzed the queries where DPM+T+OSL obtained the correct answer and the baseline system did not (see Table 5).", "The analysis showed that the vast majority of previously wrong queries were fixed by correcting an OSM tag in the query.", "For example, for the question \"closest Florist from Manchester in walking distance\" the baseline system chose the tag \"landuse : retail\" in the query, whereas DPM+T+OSL learnt that the correct tag is \"shop : florist\".", "In some cases, the question type had to be corrected, e.g. the baseline's suggested query returned the location of a point of interest but DPM+T+OSL correctly returns the phone number.", "Finally, in a few cases DPM+T+OSL corrected the structure of a query, e.g. by searching for a point of interest in the east of an area rather than the south.", "Analysis OSL Update Variation.", "Using the DPM+T+OSL objective and the simulated feedback setup, we vary the frequency of updating the reweighting constant.", "Results are reported in Table 6.", "Calculating the constant only once at the beginning leads to a near identical result in F1 score as not using OSL.", "The more frequent update strategies, once or four times per epoch, are more effective.", "Both strategies reduce variance further and lead to higher F1 scores.", "Updating four times per epoch, compared to once per epoch, leads to a nominally higher performance in F1.", "It has the additional benefit that the re-calculation is done at the same time as the validation, leading to no additional slowdown, as executing the development-set queries against the database takes longer than re-calculating the constant.", "Updating after every minibatch is infeasible as it slows down training too much.", "Compared to the previous setup, iterating over one epoch takes approximately an additional 5.5 hours.", "Conclusion We introduced a scenario for improving a neural semantic parser from logged bandit feedback.", "This scenario is important to avoid complex and costly data annotation for supervised learning, and it is realistic in commercial applications where weak feedback can be collected easily in large amounts from users.", "We presented robust counterfactual learning objectives that allow us to perform stochastic gradient optimization, which is crucial when working with neural networks.", "Furthermore, we showed that it is essential to obtain reward signals at the token level in order to learn from partially correct queries.", "We presented experimental results using feedback collected from humans and a larger-scale setup with simulated feedback.", "In both cases we show that a strong baseline using a bandit-to-supervised conversion can be significantly outperformed by a combination of one-step-late reweighting and token-level rewards.", "Finally, our approach to collecting feedback can also be transferred to other domains.", "For example, Yih et al. (2016) designed a user interface to help Freebase experts to efficiently create queries.", "This interface could be reversed: given a question and a query 
produced by a parser, the interface is filled out automatically and the user has to verify if the information fits." ] }
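The counterfactual objectives defined in the record above (equations 4-8) are compact enough to state directly in code. The sketch below computes the DPM and DPM+OSL objective values from sequence probabilities; in a real system these probabilities come from the seq2seq parser and the objective is maximized via automatic differentiation, which is omitted here. All numeric values are invented for illustration.

```python
# Sketch of the DPM (eq. 4) and one-step-late reweighted DPM+OSL (eq. 6)
# objective values. pi_w are model probabilities of logged outputs under
# the current parameters w; pi_wprime_log are probabilities of the whole
# log under one-step-late parameters w'.

def dpm(deltas, pi_w):
    n = len(deltas)
    return sum(d * p for d, p in zip(deltas, pi_w)) / n

def dpm_osl(deltas_batch, pi_w_batch, pi_wprime_log):
    norm = sum(pi_wprime_log) / len(pi_wprime_log)  # full-log normalizer
    m = len(deltas_batch)
    return sum(d * p / norm for d, p in zip(deltas_batch, pi_w_batch)) / m

pi_wprime_log = [0.20, 0.05, 0.10, 0.40, 0.25]   # entire log under w'
deltas_batch = [1.0, 0.0]                        # feedback for one minibatch
pi_w_batch = [0.22, 0.04]                        # current probabilities
print(dpm(deltas_batch, pi_w_batch))
print(dpm_osl(deltas_batch, pi_w_batch, pi_wprime_log))
```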
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
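Equation 7 above replaces the sequence-level feedback with a sum over token-level rewards, so that partially correct queries still contribute learning signal. A sketch of the DPM+T objective value, with invented numbers:

```python
# Sketch of the token-level objective R_DPM+T (eq. 7): each logged query
# contributes the sum of its feedback-weighted token probabilities.

def dpm_t(logged):
    """logged: list of (token_probs, token_rewards), one pair per query."""
    total = 0.0
    for probs, rewards in logged:
        total += sum(d * p for d, p in zip(rewards, probs))
    return total / len(logged)

logged = [
    ([0.9, 0.8, 0.7], [1.0, 1.0, 1.0]),  # fully correct query
    ([0.6, 0.5, 0.4], [1.0, 0.0, 1.0]),  # one wrong token (e.g. an OSM tag),
                                         # but the correct tokens still count
]
print(dpm_t(logged))  # 1.7 = ((0.9 + 0.8 + 0.7) + (0.6 + 0.4)) / 2
```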
GEM-SciDuet-train-127#paper-1346#slide-9
One Step Late OSL Reweighting
Perform gradient descent updates & reweighting asynchronously: evaluate the reweighting sum R on the entire log of size n, using one-step-late parameters w'; update using minibatches of size m, with m ≪ n; periodically update R. This retains all desirable properties of the reweighting.
Perform gradient descent updates & reweighting asynchronously: evaluate the reweighting sum R on the entire log of size n, using one-step-late parameters w'; update using minibatches of size m, with m ≪ n; periodically update R. This retains all desirable properties of the reweighting.
[]
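The one-step-late schedule summarized on this slide interleaves minibatch updates with periodic full-log renormalization. The schematic loop below makes that schedule explicit; `StubModel` and its methods are placeholders standing in for the actual seq2seq parser and optimizer, not a real API.

```python
# Schematic OSL training loop: minibatch steps on the DPM+OSL objective,
# with the normalizer over the entire log recomputed only periodically
# (the paper refreshes it at every validation step).

class StubModel:
    def prob(self, x, y):                 # stand-in for pi_w(y|x)
        return 0.1
    def ascend(self, objective_value):    # stand-in for autodiff + SGD step
        pass

def train_osl(model, log, minibatches, refresh_every=2):
    def full_log_norm():                  # evaluated under one-step-late w'
        return sum(model.prob(x, y) for x, y, _ in log) / len(log)
    norm = full_log_norm()
    for step, batch in enumerate(minibatches, start=1):
        obj = sum(d * model.prob(x, y) / norm
                  for x, y, d in batch) / len(batch)
        model.ascend(obj)
        if step % refresh_every == 0:     # asynchronous renormalization
            norm = full_log_norm()

log = [("q1", "p1", 1.0), ("q2", "p2", 0.0), ("q3", "p3", 1.0)]
train_osl(StubModel(), log, minibatches=[log[:2], log[1:]] * 2)
print("finished schematic run")
```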
GEM-SciDuet-train-127#paper-1346#slide-10
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
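The simulated feedback used in the large-scale experiments (described earlier and repeated in the record below) follows a simple rule: a sequence-level reward of 1 only for an exact match with the gold query, and a token-level reward of 1 at each position where output and gold agree. A direct transcription of that rule, using the landuse/shop tag confusion from the paper's analysis; the token sequences are simplified illustrations.

```python
# Sketch of the simulated feedback used in the large-scale setup.

def sequence_reward(output_tokens, gold_tokens):
    return 1.0 if output_tokens == gold_tokens else 0.0

def token_rewards(output_tokens, gold_tokens):
    return [1.0 if i < len(gold_tokens) and tok == gold_tokens[i] else 0.0
            for i, tok in enumerate(output_tokens)]

out  = "query@2 area@1 keyval@2 landuse@0 retail@s qtype@1 latlong@0".split()
gold = "query@2 area@1 keyval@2 shop@0 florist@s qtype@1 latlong@0".split()
print(sequence_reward(out, gold))  # 0.0 -- not identical to the gold query
print(token_rewards(out, gold))    # 1.0 everywhere except the wrong-tag positions
```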
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1).", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required.", "(Figure 1 caption: Left: Online reinforcement learning setup for semantic parsing, where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.)", "First, we need to construct an easy-to-use user interface that allows us to collect feedback based on the parse rather than the answer.", "To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion, bandit feedback from human users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse.", "Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016) or question-answer pairs (Neelakantan et al., 2017).", "Improving semantic parsers 
using weak feedback has previously been studied (Goldwasser and Roth (2013); Artzi and Zettlemoyer (2013); inter alia).", "More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al. (2017); Mou et al. (2017); Peng et al. (2017); inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "It thus differs from a bandit setup where we assume that a reward is available for only one structure.", "Our work most closely resembles the work of Iyer et al. (2017) who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervised approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al. (2016) proposed a user interface for the Freebase database that enables a fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely and from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a), or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016).", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al. (2017b).", "Following their insight, we also assume the logs were created deterministically, i.e. the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing Our semantic parsing model is a state-of-the-art sequence-to-sequence neural network using an encoder-decoder setup (Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015).", "We use the settings of Sennrich et al. (2017), where an input sequence $x = x_1, x_2, \ldots, x_{|x|}$ (a natural language question) is encoded by a Recurrent Neural Network (RNN); each input token has an associated hidden vector $h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]$ where the former is created by a forward pass over the input, and the latter by a backward pass.", "$\overrightarrow{h_i}$ is obtained by recursively computing $f(x_i, \overrightarrow{h_{i-1}})$, where $f$ is a Gated Recurrent Unit (GRU) (Chung et al., 2014), and $\overleftarrow{h_i}$ is computed analogously.", "The input sequence is reduced to a single vector $c = g(\{h_1, \ldots, h_{|x|}\})$ which serves as the initialization of the decoder RNN.", "$g$ calculates the average over all vectors $h_i$.", "At each time step $t$ the decoder state is set by $s_t = q(s_{t-1}, y_{t-1}, c_t)$.", "$q$ is a conditional GRU with an attention mechanism and $c_t$ is the context vector computed by the attention mechanism.", "Given an output vocabulary $V_y$ and the decoder state $s_t = \{s_1, \ldots, s_{|V_y|}\}$, a softmax output layer defines a probability distribution over $V_y$ and the probability for a token $y_j$ is: $\pi_w(y_j = t_o | y_{<j}, x) = \frac{\exp(s_{t_o})}{\sum_{v=1}^{|V_y|} \exp(s_{t_v})}$ (1).", "The model $\pi_w$ can be seen as a parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence $y = y_1, y_2, \ldots, y_{|y|}$ is defined by $\pi_w(y|x) = \prod_{j=1}^{|y|} \pi_w(y_j | y_{<j}, x)$ (2).", "In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data $D_{sup} = \{(x_t, \bar{y}_t)\}_{t=1}^{n}$ of question-query pairs, where $\bar{y}_t$ is the true target query for $x_t$, the neural network can be trained using SGD and a cross-entropy (CE) objective: $L_{CE} = -\frac{1}{n} \sum_{t=1}^{n} \sum_{j=1}^{|\bar{y}|} \log \pi_w(\bar{y}_j | \bar{y}_{<j}, x)$ (3).", "Counterfactual Learning from Deterministic Bandit Logs Counterfactual Learning Objectives.", "We assume a policy $\pi_w$ that, given an input $x \in X$, defines a conditional probability distribution over possible outputs $y \in Y(x)$.", "Furthermore, we assume that the policy is parameterized by $w$ and its gradient can be derived.", "In this work, $\pi_w$ is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e. that the model produces the output token by token.", "The counterfactual learning problem can be described as follows: We are given a data log of triples $D_{log} = \{(x_t, y_t, \delta_t)\}_{t=1}^{n}$ where outputs $y_t$ for inputs $x_t$ were generated by a logging system under policy $\pi_0$, and loss values $\delta_t \in [-1, 0]$ were observed for the generated data points.", "(Table 1, score function gradients of the objectives: $\nabla_w \hat{R}_{DPM} = \frac{1}{n} \sum_{t=1}^{n} \delta_t \pi_w(y_t|x_t) \nabla_w \log \pi_w(y_t|x_t)$; $\nabla_w \hat{R}_{DPM+R} = \frac{1}{n} \sum_{t=1}^{n} [\delta_t \bar{\pi}_w(y_t|x_t) (\nabla_w \log \pi_w(y_t|x_t) - \frac{1}{n} \sum_{u=1}^{n} \bar{\pi}_w(y_u|x_u) \nabla_w \log \pi_w(y_u|x_u))]$; $\nabla_w \hat{R}_{DPM+OSL} = \frac{1}{m} \sum_{t=1}^{m} \delta_t \bar{\pi}_{w,w'}(y_t|x_t) \nabla_w \log \pi_w(y_t|x_t)$; $\nabla_w \hat{R}_{DPM+T} = \frac{1}{n} \sum_{t=1}^{n} \big(\sum_{j=1}^{|y|} \delta_j \pi_w(y_j|x_t)\big) \big(\sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j|x_t)\big)$; $\nabla_w \hat{R}_{DPM+T+OSL} = \frac{1}{m} \sum_{t=1}^{m} \big(\sum_{j=1}^{|y|} \delta_j \bar{\pi}_{w,w'}(y_t|x_t)\big) \big(\sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j|x_t)\big)$.)", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy $\pi_w$ given the data log $D_{log}$.", "In case of deterministic logging, outputs are logged with propensity $\pi_0(y_t|x_t) = 1$, $t = 1, \ldots, n$. 
This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b), without the possibility to correct the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983): $R_{DPM}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t \pi_w(y_t|x_t)$ (4).", "This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a).", "This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016).", "The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al. (2017b): $R_{DPM+R}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t \bar{\pi}_w(y_t|x_t) = \frac{\frac{1}{n} \sum_{t=1}^{n} \delta_t \pi_w(y_t|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_w(y_t|x_t)}$ (5).", "Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms where gradients using the score function gradient estimator (Fu, 2006) are shown in Table 1.", "Reweighting in Stochastic Learning.", "As shown in Swaminathan and Joachims (2015b) and Lawrence et al. (2017a), reweighting over the entire data log $D_{log}$ is crucial since it avoids that high loss outputs in the log take away probability mass from low loss outputs.", "This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order $O(\frac{1}{n})$ that decreases as $n$ increases (Kong, 1992).", "The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup since minibatch sizes large enough to retain the desirable reweighting properties are infeasible for large neural networks.", "We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting.", "The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990).", "In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions.", "In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters $w'$ from some previous iteration.", "This allows us to perform gradient descent updates and reweighting asynchronously.", "Updates are performed using minibatches, however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate.", "The new objective, called one-step-late reweighted DPM objective (DPM+OSL), optimizes $\pi_{w,w'}$ with respect to $w$ for a minibatch of size $m$, with reweighting over the entire log of size $n$ under parameters $w'$: $R_{DPM+OSL}(\pi_w) = \frac{1}{m} \sum_{t=1}^{m} \delta_t \bar{\pi}_{w,w'}(y_t|x_t) = \frac{1}{m} \sum_{t=1}^{m} \frac{\delta_t \pi_w(y_t|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t|x_t)}$ (6).", "If the renormalization is updated periodically, e.g. after every validation step, renormalizations under $w$ or $w'$ are not much different and will not hamper convergence.", "Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance.", "The gradient for learning with OSL updates is given in Table 1.", "Token-Level Rewards.", "For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to 
Token-Level Rewards. For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to be helpful: for humans, it is hard to assign a graded reward to a query at the sequence level, because either the query is correct or it is not. In particular, with a sequence-level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct. Assigning rewards at the token level eases the feedback task and allows the semantic parser to learn from partially correct queries. Thus, assuming the underlying policy can decompose over tokens, a token-level (DPM+T) reward objective can be defined:

$$R_{\text{DPM+T}}(\pi_w) = \frac{1}{n}\sum_{t=1}^{n}\left[\sum_{j=1}^{|y|} \delta_j\, \pi_w(y_j|x_t)\right]. \qquad (7)$$

Analogously, we can define an objective that combines the token-level rewards with the minibatched reweighting (DPM+T+OSL):

$$R_{\text{DPM+T+OSL}}(\pi_w) = \frac{\frac{1}{m}\sum_{t=1}^{m}\sum_{j=1}^{|y|} \delta_j\, \pi_w(y_j|x_t)}{\frac{1}{n}\sum_{t=1}^{n} \pi_{w'}(y_t|x_t)}. \qquad (8)$$

Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1.
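The token-level objective translates into a few lines of training code. In this minimal sketch, the `token_probs` accessor, returning $\pi_w(y_j|y_{<j},x)$ for every output position, is a hypothetical interface; negating the reward estimate turns it into a loss for gradient descent.

```python
import torch

def dpm_t_loss(policy, minibatch):
    """Negative minibatch estimate of the token-level objective (Eq. 7).
    Each logged item is (x, y, token_deltas), with one feedback value
    per output token of y."""
    total = 0.0
    for x, y, token_deltas in minibatch:
        probs = policy.token_probs(x, y)           # pi_w(y_j|y_<j, x)
        deltas = torch.tensor(token_deltas, dtype=probs.dtype)
        total = total - (deltas * probs).sum()     # reward -> loss
    return total / len(minibatch)
```

Dividing this estimate by the one-step-late constant from the previous sketch would yield the DPM+T+OSL objective (Eq. 8).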
Semantic Parsing in the OpenStreetMap Domain

OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interest in the world. A point of interest consists of one or more associated GPS points; further relevant information may be added at the discretion of the volunteer in the form of tags, where each tag consists of a key and an associated value, for example "tourism : hotel". The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis for a natural language interface to the OSM database. It pairs English questions with machine-readable parses, i.e. queries that can be executed against OSM.

Human Feedback Collection. The task of creating a natural language interface for OSM exhibits the typical difficulties that make supervised data expensive to collect. The machine-readable language of the queries is based on the OVERPASS query language, which was designed specifically for the OSM database, so it is not easy to find experts who could provide correct queries. It is equally difficult to ask workers on crowdsourcing platforms for the correct answer: for many questions, the answer set is too large to expect a worker to count or list all of its items in a reasonable amount of time and without errors. For example, for the question "How many hotels are there in Paris?" there are 951 hotels annotated in the OSM database. Instead, we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human. The question and the created block of statements are embedded in a user interface with a form that can be filled out by users; each statement is accompanied by a set of radio buttons from which a user selects either "Yes" or "No". For a screenshot of the interface and an example, see Figure 2. In total there are 8 different types of statements. The presence of certain tokens in a query triggers different statement types; for example, the token "area" triggers the statement type "Town". The statement is then populated with the corresponding information from the query: in the case of "area", the associated OSM value is used, e.g. "Paris". With this, the meaning of every query can be captured by a set of human-understandable statements. For a full overview of all statement types and their triggers, see section B of the supplementary material. OSM tags and keys are generally understandable: for example, the correct OSM tag for "hotels" is "tourism : hotel", and when searching for websites, the correct question type key would be "website". Nevertheless, for each OSM tag or key we automatically retrieve the corresponding page on the OpenStreetMap Wiki (https://wiki.openstreetmap.org/) and extract the description for this tag or key. The description is made available to the user in the form of a tool-tip that appears when hovering over the tag or key with the mouse; if a user is unsure whether an OSM tag or key is correct, they can read this description to help in their decision making. Once the form is submitted, a script maps each statement back to the corresponding tokens in the original query, and these tokens then receive negative or positive feedback based on the feedback the user provided for that statement.
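The round trip from query tokens to judgeable statements and back to token-level rewards can be sketched as follows. The trigger table (only two of the 8 statement types), the position bookkeeping, and the default reward of 1 for tokens not covered by any statement are simplifying assumptions made for illustration, not the exact implementation behind the interface.

```python
# Hypothetical subset of statement triggers; the full system has 8 types.
STATEMENT_TRIGGERS = {"area": "Town", "keyval": "Tag"}

def statements_for_query(tokens):
    """Turn query tokens into human-judgeable statements, remembering
    which token positions each statement was generated from."""
    statements = []
    for pos, tok in enumerate(tokens):
        head = tok.split("@")[0]          # e.g. "area@1" -> "area"
        if head in STATEMENT_TRIGGERS:
            statements.append({"type": STATEMENT_TRIGGERS[head],
                               "positions": [pos]})
    return statements

def token_rewards(tokens, statements, judgments):
    """Map per-statement Yes/No judgments back onto the query tokens:
    0 for tokens of statements marked wrong, 1 otherwise (assumed)."""
    rewards = [1] * len(tokens)
    for stmt, ok in zip(statements, judgments):
        if not ok:
            for pos in stmt["positions"]:
                rewards[pos] = 0
    return rewards
```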
Corpus Extension. Similar to the extension of the NLMAPS corpus by Lawrence and Riezler (2016), who include shortened questions of the kind more typically used by humans in search tasks, we present an automatic extension that provides larger coverage of common OSM tags; the extended dataset, called NLMAPS V2, will be released upon acceptance of the paper. The basis for the extension is a hand-written, freely available online list that links natural language expressions such as "cash machine" to appropriate OSM tags, in this case "amenity : atm". Using the list, we generate for each unique expression-tag pair a set of question-query pairs. These pairs contain several placeholders which are filled automatically in a second step. To fill the area placeholder $LOC, we sample from a list of 30 cities from France, Germany and the UK. $POI is the placeholder for a point of interest; we sample it from the list of objects which are located in the previously sampled city and which have a name key, and the value belonging to the name key fills this slot. The placeholder $QTYPE is filled by uniformly sampling from the four primary question types available in the NLMAPS query language; on the natural language side they correspond to "How many", "Where", "Is there" and $KEY. $KEY is a further parameter belonging to the primary question operator FINDKEY and can be filled by any OSM key, such as name, website or height. To ensure that the generated query has an answer, we first run a query with the current tag ("amenity : atm") to find all objects fulfilling this requirement in the area of the already sampled city; from the list of returned objects and the keys that appear in association with them, we uniformly sample a key. For $DIST we choose between the pre-defined options for walking distance and within-city distance. The expressions map to corresponding values which define the size of a radius in which objects of interest (with tag "amenity : atm") will be located. If walking distance is selected, we add "in walking distance" to the question; otherwise no extra text is added, the within-city distance being the default. This sampling process is repeated twice. Table 2 presents the corpus statistics, comparing NLMAPS (Lawrence and Riezler, 2016) to our automatic extension with the most common OSM tags. The automatic extension, which obviates the need for expensive manual work, increases the number of question-query pairs by an order of magnitude, and the numbers of tokens and types increase in a similar vein. However, the average sentence length drops. This comes as no surprise given the nature of the rather simple hand-written list, which never contains more than one tag per element, resulting in simpler question structures. The main idea behind utilizing this list, however, is to extend the coverage to previously unknown OSM tags: with 6,582 distinct tags compared to the previous 477, this was clearly successful. Together with the still complex sentences from the original corpus, a semantic parser is now able to learn both complex questions and a large variety of tags. An experiment that empirically validates the usefulness of the automatically created data can be found in the supplementary material, section A.
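The template instantiation can be sketched as follows for the counting question type. The city list, surface patterns and helper names are illustrative stand-ins; the real pipeline additionally samples $POI, $QTYPE and $KEY values via live OSM lookups as described above.

```python
import random

CITIES = ["Paris", "Heidelberg", "Edinburgh"]  # stand-in for the 30 cities

def fill_template(expression, tag):
    """Instantiate one counting question-query pair for an entry such as
    ("cash machine", "amenity : atm"); the other three question types
    would be sampled and handled analogously."""
    loc = random.choice(CITIES)                # fills $LOC
    walking = random.choice([True, False])     # fills $DIST
    question = f"How many {expression}s are there in {loc}"
    if walking:
        question += " in walking distance"
    key, value = [s.strip() for s in tag.split(":")]
    query = (f"query(area(keyval('name','{loc}')),"
             f"nwr(keyval('{key}','{value}')),qtype(count))")
    return question, query
```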
Experiments

General Settings. In our experiments we use the sequence-to-sequence neural network package NEMATUS (Sennrich et al., 2017). Following the method used by Haas and Riezler (2016), we split the queries into individual tokens by taking a pre-order traversal of the original tree-like structure. For example, "query(west(area(keyval('name','Paris')),nwr(keyval('railway','station'))),qtype(count))" becomes "query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0". The SGD optimizer used is ADADELTA (Zeiler, 2012). The model employs 1,024 hidden units and word embeddings of size 1,000. The maximum sentence length is 200, and gradients are clipped if they exceed a value of 1.0. The stopping point is determined by validation on the development set, selecting the point at which the highest evaluation score is obtained; F1 validation is run after every 100 updates, and each update is made on the basis of a minibatch of size 80. The evaluation of all models is based on the answers obtained by executing the most likely query found by a beam search with a beam of size 12. We report the F1 score, which is the harmonic mean of precision and recall: recall is defined as the percentage of fully correct answers divided by the set size, and precision as the percentage of correct answers among the answers with non-empty strings. Statistical significance between models is measured using an approximate randomization test (Noreen, 1989).
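To make the linearization concrete, here is a small sketch that reproduces the example above by a pre-order traversal of a hand-built tree. The `(label, children)` tuple representation, with `None` marking quoted string arguments, is an assumption made for the example; the paper does not specify its tree data structure.

```python
def linearize(node):
    """Flatten a tree-shaped query into a token sequence by pre-order
    traversal, suffixing every functor with its arity (e.g. keyval@2)
    and every quoted string argument with @s (e.g. Paris@s)."""
    label, children = node
    if children is None:                 # quoted string argument
        return [f"{label}@s"]
    tokens = [f"{label}@{len(children)}"]
    for child in children:
        tokens.extend(linearize(child))
    return tokens

tree = ("query", [
    ("west", [
        ("area", [("keyval", [("name", []), ("Paris", None)])]),
        ("nwr",  [("keyval", [("railway", []), ("station", None)])]),
    ]),
    ("qtype", [("count", [])]),
])
print(" ".join(linearize(tree)))
# query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2
# railway@0 station@s qtype@1 count@0
```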
Baseline Parser & Log Creation. Our experiment design assumes a baseline neural semantic parser that is trained in fully supervised fashion and is to be improved by bandit feedback obtained for system outputs produced by the baseline for given questions. For this purpose, we randomly select 2,000 question-query pairs from the full extended NLMAPS V2 corpus; we call this dataset D_sup. Using this dataset, a baseline semantic parser is trained in supervised fashion under a cross-entropy objective. It obtains an F1 score of 57.45% and serves as the logging policy π_0. Furthermore, we randomly split off 1,843 and 2,000 pairs for a development and a test set, respectively. This leaves a set of 22,765 question-query pairs; the questions can be used as input, with bandit feedback collected for the most likely output of the semantic parser. We refer to this dataset as D_log. To collect human feedback, we take the first 1,000 questions from D_log and use π_0 to parse them, obtaining one output query for each. Five question-query pairs are discarded because the suggested query is invalid. For the remaining pairs, each query is transformed into a block of human-understandable statements and embedded into the user interface described in Section 5. We recruited 9 users to provide feedback for these question-query pairs; the resulting log is referred to as D_human. Every question-query pair is purposely evaluated only once, to mimic a realistic real-world scenario in which user logs are collected as users use the system; in such a scenario, it is also not possible to explicitly obtain several evaluations for the same question-query pair. Some examples of the received feedback can be found in the supplementary material, section C. To verify that the feedback collection is efficient, we measured the time each user took from loading a form to submitting it: to provide feedback for one question-query pair, users took 16.4 seconds on average, with a standard deviation of 33.2 seconds, and the vast majority (728 instances) were completed in less than 10 seconds.

Learning from Human Bandit Feedback. An analysis of D_human shows that for 531 queries all corresponding statements were marked as correct. We consider a simple baseline that treats this completely correct logged data as a supervised data set with which training continues under the cross-entropy objective; we call this baseline bandit-to-supervised conversion (B2S). Furthermore, we present experimental results using the log D_human for stochastic (minibatch) gradient descent optimization of the counterfactual objectives introduced in equations 4, 6, 7 and 8. For the token-level feedback, we map the evaluated statements back to the corresponding tokens in the original query and assign these tokens a feedback of 0 if the corresponding statement was marked as wrong, and 1 otherwise. In the case of sequence-level feedback, a query receives a feedback of 1 if all statements are marked correct, and 0 otherwise. For the OSL objectives, a separate experiment (see the analysis below) showed that updating the reweighting constant after every validation step promises the best trade-off between performance and speed. Results, averaged over 3 runs, are reported in Table 3. The B2S model improves slightly over the baseline, but not significantly. DPM improves further, significantly beating the baseline. Using the multiplicative control variate modified for SGD by OSL updates does not seem to help in this setup. By moving to token-level rewards, it becomes possible to learn from partially correct queries; these provide valuable information that is not present in the subset of fully correct queries employed by the previous models. Optimizing DPM+T leads to a slight improvement, and combined with the multiplicative control variate, DPM+T+OSL yields an improvement of about 1.0 in F1 score over the baseline, beating both the baseline and the B2S model by a significant margin.

Learning from Large-Scale Simulated Feedback. We want to investigate whether the results scale if a larger log is used. Thus, we use π_0 to parse all 22,765 questions from D_log and obtain an output query for each. For sequence-level rewards, we assign a feedback of 1 to a query if it is identical to the true target query, and 0 otherwise. We also simulate token-level rewards by iterating over the indices of the output and assigning a feedback of 1 if the same token appears at the current index of the true target query, and 0 otherwise.
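A minimal sketch of this reward simulation, assuming queries are given as token lists; the positional comparison follows the description above.

```python
def simulate_feedback(output_tokens, gold_tokens):
    """Simulated rewards for the large-scale experiments: the sequence
    reward is 1 iff the output matches the gold query exactly; token
    reward j is 1 iff position j carries the gold token."""
    seq_reward = int(output_tokens == gold_tokens)
    token_rewards = [int(j < len(gold_tokens) and tok == gold_tokens[j])
                     for j, tok in enumerate(output_tokens)]
    return seq_reward, token_rewards
```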
An analysis of D_log shows that 46.27% of the queries have a sequence-level reward of 1 and can thus be used by the B2S approach. Results are reported in Table 4. We see that the B2S model outperforms the baseline model by a large margin, yielding an increase in F1 score of 6.24 points. Optimizing the DPM objective also yields a significant increase over the baseline, but its performance falls short of the stronger B2S baseline. Optimizing the DPM+OSL objective leads to a substantial improvement in F1 score over optimizing DPM, but still falls slightly short of the strong B2S baseline. Token-level rewards are again crucial to beat the B2S baseline significantly: DPM+T is already able to significantly outperform B2S in this setup, and DPM+T+OSL improves upon this further. We additionally analyzed queries for which DPM+T+OSL obtained the correct answer and the baseline system did not (see Table 5). The analysis showed that the vast majority of previously wrong queries were fixed by correcting an OSM tag in the query. For example, for the question "closest Florist from Manchester in walking distance" the baseline system chose the tag "landuse : retail" in the query, whereas DPM+T+OSL learnt that the correct tag is "shop : florist". In some cases, the question type had to be corrected, e.g. the baseline's suggested query returned the location of a point of interest whereas DPM+T+OSL correctly returns the phone number. Finally, in a few cases DPM+T+OSL corrected the structure of a query, e.g. by searching for a point of interest in the east of an area rather than the south.

Analysis

OSL Update Variation. Using the DPM+T+OSL objective and the simulated feedback setup, we vary the frequency of updating the reweighting constant; results are reported in Table 6. Calculating the constant only once at the beginning leads to a nearly identical F1 score as not using OSL at all. The more frequent update strategies, once or four times per epoch, are more effective: both reduce variance further and lead to higher F1 scores. Updating four times per epoch leads to a nominally higher F1 than updating once per epoch, and has the additional benefit that the re-calculation is done at the same time as validation, causing no additional slow-down, since executing the development-set queries against the database takes longer than re-calculating the constant. Updating after every minibatch is infeasible, as it slows down training too much: compared to the previous setup, iterating over one epoch takes approximately an additional 5.5 hours.

Conclusion

We introduced a scenario for improving a neural semantic parser from logged bandit feedback. This scenario is important for avoiding complex and costly data annotation for supervised learning, and it is realistic in commercial applications where weak feedback can be collected easily and in large amounts from users. We presented robust counterfactual learning objectives that allow stochastic gradient optimization, which is crucial when working with neural networks. Furthermore, we showed that it is essential to obtain reward signals at the token level in order to learn from partially correct queries. We presented experimental results using feedback collected from humans and a larger-scale setup with simulated feedback; in both cases, we show that a strong baseline using a bandit-to-supervised conversion can be significantly outperformed by a combination of one-step-late reweighting and token-level rewards. Finally, our approach to collecting feedback can also be transferred to other domains. For example, Yih et al. (2016) designed a user interface to help Freebase experts efficiently create queries; this interface could be reversed: given a question and a query produced by a parser, the interface is filled out automatically and the user has to verify whether the information fits.
] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-127#paper-1346#slide-10
Token Level Feedback
Introduction Task Objectives Experiments Conclusion
Introduction Task Objectives Experiments Conclusion
[]
GEM-SciDuet-train-127#paper-1346#slide-12
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
GEM-SciDuet-train-127#paper-1346#slide-12
Experimental Setup
Introduction Task Objectives Experiments Conclusion • sequence-to-sequence neural network Nematus • deployed system: pre-trained on 2k question-parse pairs humans judged 1k system outputs • average time to judge a parse: 16.4s simulated feedback for 23k system outputs • token-wise comparison to gold parse • bandit-to-supervised conversion (B2S): all instances in log with reward 1 are used as supervised training
Introduction Task Objectives Experiments Conclusion • sequence-to-sequence neural network Nematus • deployed system: pre-trained on 2k question-parse pairs humans judged 1k system outputs • average time to judge a parse: 16.4s simulated feedback for 23k system outputs • token-wise comparison to gold parse • bandit-to-supervised conversion (B2S): all instances in log with reward 1 are used as supervised training
[]
GEM-SciDuet-train-127#paper-1346#slide-13
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1).", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required.", "Figure 1: Left: Online reinforcement learning setup for semantic parsing, where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.", "First, we need to construct an easy-to-use user interface that allows us to collect feedback based on the parse rather than the answer.", "To this end, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This shows that our approach is cheaper and more efficient than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion, bandit feedback from human users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "We then repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse.", "Finally, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016) or question-answer pairs (Neelakantan et al., 2017).", "Improving semantic parsers 
using weak feedback has previously been studied (Goldwasser and Roth, 2013; Artzi and Zettlemoyer, 2013; inter alia).", "More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al., 2017; Mou et al., 2017; Peng et al., 2017; inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "This differs from a bandit setup, where we assume that a reward is available for only one structure.", "Our work most closely resembles the work of Iyer et al. (2017), who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervised approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al. (2016) proposed a user interface for the Freebase database that enables fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a), or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016).", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al. (2017b).", "Following their insight, we also assume the logs were created deterministically, i.e., the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing Our semantic parsing model is a state-of-the-art sequence-to-sequence neural network using an encoder-decoder setup (Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015).", "We use the settings of Sennrich et al. (2017), where an input sequence $x = x_1, x_2, \\ldots, x_{|x|}$ (a natural language question) is encoded by a Recurrent Neural Network (RNN); each input token has an associated hidden vector $h_i = [\\overrightarrow{h}_i; \\overleftarrow{h}_i]$, where the former is created by a forward pass over the input and the latter by a backward pass.", "$\\overrightarrow{h}_i$ is obtained by recursively computing $f(x_i, \\overrightarrow{h}_{i-1})$, where $f$ is a Gated Recurrent Unit (GRU) (Chung et al., 2014), and $\\overleftarrow{h}_i$ is computed analogously.", "The input sequence is reduced to a single vector $c = g(\\{h_1, \\ldots, h_{|x|}\\})$, which serves as the initialization of the decoder RNN.", "$g$ calculates the average over all vectors $h_i$.", "At each time step $t$ the decoder state is set by $s_t = q(s_{t-1}, y_{t-1}, c_t)$.", "$q$ is a conditional GRU with an attention mechanism, and $c_t$ is the context vector computed by the attention 
mechanism.", "Given an output vocabulary $V_y$ and the decoder state $s_t = \\{s_1, \\ldots, s_{|V_y|}\\}$, a softmax output layer defines a probability distribution over $V_y$, and the probability for a token $y_j$ is: $\\pi_w(y_j = t_o \\mid y_{<j}, x) = \\frac{\\exp(s_{t_o})}{\\sum_{v=1}^{|V_y|} \\exp(s_{t_v})}$. (1)", "The model $\\pi_w$ can be seen as a parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence $y = y_1, y_2, \\ldots, y_{|y|}$ is defined by $\\pi_w(y|x) = \\prod_{j=1}^{|y|} \\pi_w(y_j \\mid y_{<j}, x)$. (2)", "In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data $D_{sup} = \\{(x_t, \\bar{y}_t)\\}_{t=1}^{n}$ of question-query pairs, where $\\bar{y}_t$ is the true target query for $x_t$, the neural network can be trained using SGD and a cross-entropy (CE) objective: $L_{CE} = -\\frac{1}{n} \\sum_{t=1}^{n} \\sum_{j=1}^{|\\bar{y}|} \\log \\pi_w(\\bar{y}_j \\mid \\bar{y}_{<j}, x_t)$. (3)", "Counterfactual Learning from Deterministic Bandit Logs Counterfactual Learning Objectives.", "We assume a policy $\\pi_w$ that, given an input $x \\in X$, defines a conditional probability distribution over possible outputs $y \\in Y(x)$.", "Furthermore, we assume that the policy is parameterized by $w$ and its gradient can be derived.", "In this work, $\\pi_w$ is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e., that the model produces the output token by token.", "The counterfactual learning problem can be described as follows: We are given a data log of triples $D_{log} = \\{(x_t, y_t, \\delta_t)\\}_{t=1}^{n}$, where outputs $y_t$ for inputs $x_t$ were generated by a logging system under policy $\\pi_0$, and loss values $\\delta_t \\in [-1, 0]$ were observed for the generated data points.", "Table 1: Gradients of the counterfactual objectives: $\\nabla_w \\hat{R}_{DPM} = \\frac{1}{n} \\sum_{t=1}^{n} \\delta_t \\pi_w(y_t|x_t) \\nabla_w \\log \\pi_w(y_t|x_t)$; $\\nabla_w \\hat{R}_{DPM+R} = \\frac{1}{n} \\sum_{t=1}^{n} [\\delta_t \\bar{\\pi}_w(y_t|x_t) (\\nabla_w \\log \\pi_w(y_t|x_t) - \\frac{1}{n} \\sum_{u=1}^{n} \\bar{\\pi}_w(y_u|x_u) \\nabla_w \\log \\pi_w(y_u|x_u))]$; $\\nabla_w \\hat{R}_{DPM+OSL} = \\frac{1}{m} \\sum_{t=1}^{m} \\delta_t \\check{\\pi}_{w,w'}(y_t|x_t) \\nabla_w \\log \\pi_w(y_t|x_t)$; $\\nabla_w \\hat{R}_{DPM+T} = \\frac{1}{n} \\sum_{t=1}^{n} [\\prod_{j=1}^{|y|} \\delta_j \\pi_w(y_j|x_t)] [\\sum_{j=1}^{|y|} \\nabla_w \\log \\pi_w(y_j|x_t)]$; $\\nabla_w \\hat{R}_{DPM+T+OSL} = \\frac{1}{m} \\sum_{t=1}^{m} [\\prod_{j=1}^{|y|} \\delta_j \\check{\\pi}_{w,w'}(y_j|x_t)] [\\sum_{j=1}^{|y|} \\nabla_w \\log \\pi_w(y_j|x_t)]$ (the score-function step behind the first of these gradients is written out after this record).", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy $\\pi_w$ given the data log $D_{log}$.", "In the case of deterministic logging, outputs are logged with propensity $\\pi_0(y_t|x_t) = 1$, $t = 1, \\ldots, n$. 
This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b), without the possibility to correct the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983): $\\hat{R}_{DPM}(\\pi_w) = \\frac{1}{n} \\sum_{t=1}^{n} \\delta_t \\pi_w(y_t|x_t)$. (4)", "This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a).", "This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016).", "The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al. (2017b): $\\hat{R}_{DPM+R}(\\pi_w) = \\frac{1}{n} \\sum_{t=1}^{n} \\delta_t \\bar{\\pi}_w(y_t|x_t) = \\frac{1}{n} \\sum_{t=1}^{n} \\delta_t \\frac{\\pi_w(y_t|x_t)}{\\frac{1}{n} \\sum_{u=1}^{n} \\pi_w(y_u|x_u)}$. (5)", "Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms, where gradients using the score function gradient estimator (Fu, 2006) are shown in Table 1.", "Reweighting in Stochastic Learning.", "As shown in Swaminathan and Joachims (2015b) and Lawrence et al. (2017a), reweighting over the entire data log $D_{log}$ is crucial since it prevents high-loss outputs in the log from taking away probability mass from low-loss outputs.", "This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order $O(\\frac{1}{n})$ that decreases as $n$ increases (Kong, 1992).", "The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup, since minibatch sizes large enough to retain these reweighting properties are infeasible for large neural networks.", "We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting.", "The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990).", "In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions.", "In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters $w'$ from some previous iteration.", "This allows us to perform gradient descent updates and reweighting asynchronously.", "Updates are performed using minibatches; however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate.", "The new objective, called the one-step-late reweighted DPM objective (DPM+OSL), optimizes $\\check{\\pi}_{w,w'}$ with respect to $w$ for a minibatch of size $m$, with reweighting over the entire log of size $n$ under parameters $w'$: $\\hat{R}_{DPM+OSL}(\\pi_w) = \\frac{1}{m} \\sum_{t=1}^{m} \\delta_t \\check{\\pi}_{w,w'}(y_t|x_t) = \\frac{1}{m} \\sum_{t=1}^{m} \\delta_t \\frac{\\pi_w(y_t|x_t)}{\\frac{1}{n} \\sum_{u=1}^{n} \\pi_{w'}(y_u|x_u)}$. (6)", "If the renormalization is updated periodically, e.g., after every validation step, renormalizations under $w$ or $w'$ are not much different and will not hamper convergence.", "Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance.", "The gradient for learning with OSL updates is given in Table 1; a minimal code sketch of this update schedule follows after this record.", "Token-Level Rewards.", "For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to 
be helpful: For humans, it is hard to assign a graded reward to a query at the sequence level because either the query is correct or it is not.", "In particular, with a sequence-level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct.", "Assigning rewards at the token level eases the feedback task and allows the semantic parser to learn from partially correct queries.", "Thus, assuming the underlying policy can decompose over tokens, a token-level reward objective (DPM+T) can be defined: $\\hat{R}_{DPM+T}(\\pi_w) = \\frac{1}{n} \\sum_{t=1}^{n} [\\prod_{j=1}^{|y|} \\delta_j \\pi_w(y_j|x_t)]$. (7)", "Analogously, we can define an objective that combines the token-level rewards and the minibatched reweighting (DPM+T+OSL): $\\hat{R}_{DPM+T+OSL}(\\pi_w) = \\frac{1}{m} \\sum_{t=1}^{m} \\frac{\\prod_{j=1}^{|y|} \\delta_j \\pi_w(y_j|x_t)}{\\frac{1}{n} \\sum_{u=1}^{n} \\pi_{w'}(y_u|x_u)}$. (8)", "Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1.", "Semantic Parsing in the OpenStreetMap Domain OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interest in the world.", "A point of interest consists of one or more associated GPS points.", "Further relevant information may be added at the discretion of the volunteer in the form of tags.", "Each tag consists of a key and an associated value, for example \"tourism : hotel\".", "The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis to create a natural language interface to the OSM database.", "It pairs English questions with machine readable parses, i.e., queries that can be executed against OSM.", "Human Feedback Collection.", "The task of creating a natural language interface for OSM demonstrates typical difficulties that make it expensive to collect supervised data.", "The machine readable language of the queries is based on the OVERPASS query language, which was specifically designed for the OSM database.", "It is thus not easily possible to find experts who could provide correct queries.", "It is equally difficult to ask workers on crowdsourcing platforms for the correct answer.", "For many questions, the answer set is too large to expect a worker to count or list all its elements in a reasonable amount of time and without errors.", "For example, for the question \"How many hotels are there in Paris?\", there are 951 hotels annotated in the OSM database.", "Instead, we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human.", "The question and the created block of statements are embedded in a user interface with a form that can be filled out by users.", "Each statement is accompanied by a set of radio buttons where a user can select either \"Yes\" or \"No\".", "For a screenshot of the interface and an example, see Figure 2.", "In total there are 8 different types of statements.", "The presence of certain tokens in a query triggers different statement types.", "For example, the token \"area\" triggers the statement type \"Town\".", "The statement is then populated with the corresponding information from the query.", "In the case of \"area\", the following OSM value is used, e.g., \"Paris\".", "With this, the meaning of every query can be captured by a set of human-understandable statements.", "For a full overview of all statement types and their triggers, see section B of the supplementary material.", "OSM tags and keys are generally understandable.", "For example, the correct OSM tag for \"hotels\" is \"tourism : hotel\" and when searching for websites, the 
correct question type key would be \"website\".", "Nevertheless, for each OSM tag or key, we automatically search for the corresponding page on the OpenStreetMap Wiki 3 and extract the description for this tag or key.", "The description is made available to the user in the form of a tool-tip that appears when hovering over the tag or key with the mouse.", "If a user is unsure whether an OSM tag or key is correct, they can read this description to help in their decision making.", "Once the form is submitted, a script maps each statement back to the corresponding tokens in the original query.", "These tokens then receive negative or positive feedback based on the feedback the user provided for that statement (a small illustrative sketch of this mapping follows after this record's reference field).", "Corpus Extension.", "Similar to the extension of the NLMAPS corpus by Lawrence and Riezler (2016), who include shortened questions which are more typically used by humans in search tasks, we present an automatic extension that allows a larger coverage of common OSM tags. 4", "The basis for the extension is a hand-written, freely available online list 5 that links natural language expressions such as \"cash machine\" to appropriate OSM tags, in this case \"amenity : atm\".", "Using the list, we generate for each unique expression-tag pair a set of question-query pairs.", "These latter pairs contain several placeholders which will be filled automatically in a second step.", "[Footnote 3: https://wiki.openstreetmap.org/] [Footnote 4: The extended dataset, called NLMAPS V2, will be released upon acceptance of the paper.] [Table 2 caption (truncated): ... (Lawrence and Riezler, 2016) and the automatic extensions of the most common OSM tags.]", "To fill the area placeholder $LOC, we sample from a list of 30 cities from France, Germany and the UK.", "$POI is the placeholder for a point of interest.", "We sample it from the list of objects which are located in the previously sampled city and which have a name key.", "The value belonging to the name key will be used to fill this slot.", "The placeholder $QTYPE is filled by uniformly sampling from the four primary question types available in the NLMAPS query language.", "On the natural language side, they correspond to \"How many\", \"Where\", \"Is there\" and $KEY.", "$KEY is a further parameter belonging to the primary question operator FINDKEY.", "It can be filled by any OSM key, such as name, website or height.", "To ensure that there will be an answer for the generated query, we first ran a query with the current tag (\"amenity : atm\") to find all objects fulfilling this requirement in the area of the already sampled city.", "From the list of returned objects and the keys that appear in association with them, we uniformly sampled a key.", "For $DIST we chose between the pre-defined options for walking distance and within-city distance.", "The expressions map to corresponding values which define the size of a radius in which objects of interest (with tag \"amenity : atm\") will be located.", "If walking distance was selected, we added \"in walking distance\" to the question.", "Otherwise no extra text was added to the question, assuming the within-city distance to be the default.", "This sampling process was repeated twice.", "Table 2 presents the corpus statistics, comparing NLMAPS to our extension.", "The automatic extension, obviating the need for expensive manual work, increases the number of question-query pairs by an order of magnitude.", "Consequently, the numbers of tokens and types increase in a similar vein.", "However, the average sentence length drops.", "This comes as 
no surprise due to the nature of the rather simple hand-written list, which never contains more than one tag per element, resulting in simpler question structures.", "However, the main idea of utilizing this list is to extend the coverage to previously unknown OSM tags.", "With 6,582 distinct tags compared to the previous 477, this was clearly successful.", "Together with the still complex sentences from the original corpus, a semantic parser is now able to learn both complex questions and a large variety of tags.", "An experiment that empirically validates the usefulness of the automatically created data can be found in the supplementary material, section A.", "Experiments General Settings.", "In our experiments we use the sequence-to-sequence neural network package NEMATUS (Sennrich et al., 2017).", "Following the method used by Haas and Riezler (2016), we split the queries into individual tokens by taking a pre-order traversal of the original tree-like structure.", "For example, \"query(west(area(keyval('name','Paris')), nwr(keyval('railway','station'))),qtype(count))\" becomes \"query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0\".", "The SGD optimizer used is ADADELTA (Zeiler, 2012).", "The model employs 1,024 hidden units and word embeddings of size 1,000.", "The maximum sentence length is 200 and gradients are clipped if they exceed a value of 1.0.", "The stopping point is determined by validation on the development set, selecting the point at which the highest evaluation score is obtained.", "F1 validation is run after every 100 updates, and each update is made on the basis of a minibatch of size 80.", "The evaluation of all models is based on the answers obtained by executing the most likely query after a beam search with a beam of size 12.", "We report the F1 score, which is the harmonic mean of precision and recall.", "Recall is defined as the percentage of fully correct answers relative to the set size.", "Precision is the percentage of correct answers out of the set of answers with non-empty strings.", "Statistical significance between models is measured using an approximate randomization test (Noreen, 1989).", "Baseline Parser & Log Creation.", "Our experiment design assumes a baseline neural semantic parser that is trained in fully supervised fashion and is to be improved by bandit feedback obtained for outputs of the baseline system on given questions.", "For this purpose, we select 2,000 question-query pairs randomly from the full extended NLMAPS V2 corpus.", "We will call this dataset $D_{sup}$.", "Using this dataset, a baseline semantic parser is trained in supervised fashion under a cross-entropy objective.", "It obtains an F1 score of 57.45% and serves as the logging policy $\\pi_0$.", "Furthermore, we randomly split off 1,843 and 2,000 pairs for a development and test set, respectively.", "This leaves a set of 22,765 question-query pairs.", "The questions can be used as input, and bandit feedback can be collected for the most likely output of the semantic parser.", "We refer to this dataset as $D_{log}$.", "To collect human feedback, we take the first 1,000 questions from $D_{log}$ and use $\\pi_0$ to parse these questions to obtain one output query for each.", "Five question-query pairs are discarded because the suggested query is invalid.", "For the remaining question-query pairs, the queries are each transformed into a block of human-understandable statements and embedded into the user interface described in Section 
5.", "We recruited 9 users to provide feedback for these question-query pairs.", "The resulting log is referred to as D human .", "Every question-query pair is purposely evaluated only once to mimic a realistic real-world scenario where user logs are collected as users use the system.", "In this scenario, it is also not possible to explicitly obtain several evaluations for the same question-query pair.", "Some examples of the received feedback can be found in the supplementary material, section C. To verify that the feedback collection is efficient, we measured the time each user took from loading a form to submitting it.", "To provide feedback for one question-query pair, users took 16.4 seconds on average with a standard deviation of 33.2 seconds.", "The vast majority (728 instances) are completed in less than 10 seconds.", "Learning from Human Bandit Feedback.", "An analysis of D human shows that for 531 queries all corresponding statements were marked as correct.", "We consider a simple baseline that treats completely correct logged data as a supervised data set with which training continues using the crossentropy objective.", "We call this baseline banditto-supervised conversion (B2S).", "Furthermore, we present experimental results using the log D human for stochastic (minibatch) gradient descent optimization of the counterfactual objectives introduced in equations 4, 6, 7 and 8.", "For the tokenlevel feedback, we map the evaluated statements back to the corresponding tokens in the original query and assign these tokens a feedback of 0 if the corresponding statement was marked as wrong and 1 otherwise.", "In the case of sequence-level feedback, the query receives a feedback of 1 if all statements are marked correct, 0 otherwise.", "For the OSL objectives, a separate experiment (see below) showed that updating the reweighting constant after every validation step promises the best trade-off between performance and speed.", "Results, averaged over 3 runs, are reported in Table 3 .", "The B2S model can slightly improve upon the baseline but not significantly.", "DPM improves further, significantly beating the baseline.", "Using the multiplicative control variate modified for SGD by OSL updates does not seem to help in this setup.", "By moving to token-level rewards, it is possible to learn from partially correct queries.", "These partially correct queries provide valuable information that is not present in the subset of correct answers employed by the previous models.", "Optimizing DPM+T leads to a slight improvement and combined with the multiplicative control variate, DPM+T+OSL yields an improvement of about 1.0 in F1 score upon the baseline.", "It beats both the baseline and the B2S model by a significant margin.", "Learning from Large-Scale Simulated Feedback.", "We want to investigate whether the results scale if a larger log is used.", "Thus, we use π 0 to parse all 22,765 questions from D log and obtain for each an output query.", "For sequence level rewards, we assign feedback of 1 for a query if it is identical to the true target query, 0 otherwise.", "We also simulate token-level rewards by iterating over the indices of the output and assigning a feedback of 1 if the same token appears at the current index for the true target query, 0 otherwise.", "An analysis of D log shows that 46.27% of the queries have a sequence level reward of 1 and are Table 4 .", "We see that the B2S model outperforms the baseline model by a large margin, yielding an increase in F1 score by 6.24 
points.", "Optimizing the DPM objective also yields a significant increase over the baseline, but its performance falls short of the stronger B2S baseline.", "Optimizing the DPM+OSL objective leads to a substantial improvement in F1 score over optimizing DPM but still falls slightly short of the strong B2S baseline.", "Token-level rewards are again crucial to beat the B2S baseline significantly.", "DPM+T is already able to significantly outperform B2S in this setup and DPM+T+OSL can improve upon this further.", "tained the correct answer and the baseline system did not (see Table 5 ).", "The analysis showed that the vast majority of previously wrong queries were fixed by correcting an OSM tag in the query.", "For example, for the question \"closest Florist from Manchester in walking distance\" the baseline system chose the tag \"landuse : retail\" in the query, whereas DPM+T+OSL learnt that the correct tag is \"shop : florist\".", "In some cases, the question type had to be corrected, e.g.", "the baseline's suggested query returned the location of a point of interest but DPM+T+OSL correctly returns the phone number.", "Finally, in a few cases DPM+T+OSL corrected the structure for a query, e.g.", "by searching for a point of interest in the east of an area rather than the south.", "Analysis OSL Update Variation.", "Using the DPM+T+OSL objective and the simulated feedback setup, we vary the frequency of updating the reweighting constant.", "Results are reported in Table 6 .", "Calculating the constant only once at the beginning leads to a near identical result in F1 score as not using OSL.", "The more frequent update strategies, once or four times per epoch, are more effective.", "Both strategies reduce variance further and lead to higher F1 scores.", "Updating four times per epoch compared to once per epoch, leads to a nominally higher performance in F1.", "It has the additional benefit that the re-calculation is done at the same time as the validation, leading to no additional slow down as executing the queries for the development set against the database takes longer than the re-calculation of the constant.", "Updating after every minibatch is infeasible as it slows down training too much.", "Compared to the previous setup, iterating over one epoch takes approximately an additional 5.5 hours.", "Conclusion We introduced a scenario for improving a neural semantic parser from logged bandit feedback.", "This scenario is important to avoid complex and costly data annotation for supervise learning, and it is realistic in commercial applications where weak feedback can be collected easily in large amounts from users.", "We presented robust counterfactual learning objectives that allow to perform stochastic gradient optimization which is crucial in working with neural networks.", "Furthermore, we showed that it is essential to obtain reward signals at the token-level in order to learn from partially correct queries.", "We presented experimental results using feedback collected from humans and a larger scale setup with simulated feedback.", "In both cases we show that a strong baseline using a bandit-to-supervised conversion can be significantly outperformed by a combination of a onestep-late reweighting and token-level rewards.", "Finally, our approach to collecting feedback can also be transferred to other domains.", "For example, (Yih et al., 2016) designed a user interface to help Freebase experts to efficiently create queries.", "This interface could be reversed: given a question and a query 
produced by a parser, the interface is filled out automatically and the user has to verify whether the information fits." ] }
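The gradients in the reconstructed Table 1 above all follow from the score-function (log-derivative) identity; as a sanity check, here is the derivation for the plain DPM gradient written out in LaTeX (standard algebra, not additional material from the source):

```latex
\nabla_w \hat{R}_{DPM}(\pi_w)
  = \frac{1}{n}\sum_{t=1}^{n} \delta_t \,\nabla_w \pi_w(y_t \mid x_t)
  = \frac{1}{n}\sum_{t=1}^{n} \delta_t \,\pi_w(y_t \mid x_t)\,
    \nabla_w \log \pi_w(y_t \mid x_t),
\qquad \text{using } \nabla_w \pi_w = \pi_w \,\nabla_w \log \pi_w .
```

The OSL variants reuse the same identity, with the normalizer $\frac{1}{n}\sum_{u} \pi_{w'}(y_u|x_u)$ treated as a constant with respect to $w$.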
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-127#paper-1346#slide-13
Experimental Results
Introduction Task Objectives Experiments Conclusion Human Feedback (1k) Large-Scale Simulated Feedback (23k)
Introduction Task Objectives Experiments Conclusion Human Feedback (1k) Large-Scale Simulated Feedback (23k)
[]
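For the token-level feedback referenced above, the paper maps each judged statement back to the query tokens that triggered it. A hedged Python sketch of that mapping follows; the span bookkeeping and the default reward of 1 for tokens not covered by any statement are assumptions, since the paper does not spell out the exact data structure.

```python
def token_rewards(num_tokens, statement_spans, verdicts):
    """
    num_tokens:      length |y| of the logged query
    statement_spans: list of token-index lists, one per generated statement
    verdicts:        parallel list of booleans (user clicked "Yes"/"No")
    Returns per-token feedback delta_j in {0, 1}; tokens not covered by any
    statement default to 1 here (an assumption, not stated in the paper).
    """
    delta = [1] * num_tokens
    for span, ok in zip(statement_spans, verdicts):
        for j in span:
            delta[j] = 1 if ok else 0
    return delta

# Example: a 5-token query with two statements; the second was marked wrong.
print(token_rewards(5, [[0, 1], [2, 3]], [True, False]))  # -> [1, 1, 0, 0, 1]
```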
GEM-SciDuet-train-127#paper-1346#slide-14
1346
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282 ], "paper_content_text": [ "Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task.", "The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing.", "Recent work (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed.", "A semantic parser produces multiple parses per question and corresponding answers are obtained.", "These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap.", "The parser is then guided towards correct parses using the REIN-FORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1 ).", "However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain.", "For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects.", "For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as \"near\" or \"in walking distance\" and thus need to allow for fuzziness in the answers as well.", "A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback 1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user.", "In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question.", "The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers.", "This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser.", "Commercial systems can easily log large amounts of interaction data between users and system.", "Once 
sufficient data has been collected, the log can then be used to improve the parser.", "This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1) .", "In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required: Figure 1 : Left: Online reinforcement learning setup for semantic parsing setup where both questions and gold answers are available.", "The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer.", "Right: In our setup, a question does not have an associated gold answer.", "The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback.", "Such triplets are collected in a log which can be used for offline training of a semantic parser.", "This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.", "First, we need to construct an easy-to-use user interface that allows to collect feedback based on the parse rather than the answer.", "To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human.", "This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually.", "We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average.", "This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets.", "Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing.", "A baseline neural semantic parser is trained in fully supervised fashion, human bandit feedback from human users is collected in a log and subsequently used to improve the parser.", "The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset.", "Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse.", "Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models.", "This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.", "Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016)) or question-answer pairs (Neelakantan et al., 2017) .", "Improving semantic parsers 
using weak feedback has previously been studied (Goldwasser and Roth (2013) ; Artzi and Zettlemoyer (2013) ; inter alia).", "More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al.", "(2017) ; Mou et al.", "(2017) ; Peng et al.", "(2017) ; inter alia).", "However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system.", "It thus differs from a bandit setup where we assume that a reward is available for only one structure.", "Our work most closely resembles the work of Iyer et al.", "(2017) who do make the assumption of only being able to judge one output.", "They improve their parser using simulated and real user feedback.", "Parses with negative feedback are given to experts to obtain the correct parse.", "Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective.", "We show that this bandit-to-supervision approach can be outperformed by offline bandit learning from partially correct queries.", "Yih et al.", "(2016) proposed a user interface for the Freebase database that enables a fast and easy creation of parses.", "However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely and from any user interacting with the system.", "From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a) , or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016) .", "Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks.", "Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al.", "(2017b) .", "Following their insight, we also assume the logs were created deterministically, i.e.", "the logging policy always outputs the most likely sequence.", "Their framework was applied to statistical machine translation using linear models.", "We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain.", "Neural Semantic Parsing Our semantic parsing model is a state-of-theart sequence-to-sequence neural network using an encoder-decoder setup Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015) .", "We use the settings of Sennrich et al.", "(2017) , where an input sequence x = x 1 , x 2 , .", ".", ".", "x |x| (a natural language question) is encoded by a Recurrent Neural Network (RNN), each input token has an associated hidden vector h i = [ − → h i ; ← − h i ] where the former is created by a forward pass over the input, and the latter by a backward pass.", "− → h i is obtained by recur- (Chung et al., 2014) , and ← − h i is computed analogously.", "The input sequence is reduced to a single vector c = g({h 1 , .", ".", ".", ", h |x| }) which serves as the initialization of the decoder RNN.", "g calculates the average over all vectors h i .", "At each time step t the decoder state is set by s t = q(s t−1 , y t−1 , c t ).", "q is a conditional GRU with an attention mechanism and c t is the context vector computed by the attention 
mechanism.", "Given an output vocabulary V y and the decoder state s t = {s 1 , .", ".", ".", ", s |Vy| }, a softmax output layer defines a probability distribution over V y and the probability for a token y j is: sively computing f (x i , − → h i−1 ) where f is a Gated Recurrent Unit (GRU) π w (y j = t o |y <j , x) = exp(s to ) |Vy| v=1 exp(s tv ) .", "(1) The model π w can be seen as parameterized policy over an action space defined by the target language vocabulary.", "The probability for a full output sequence y = y 1 , y 2 , .", ".", ".", "y |y| is defined by π w (y|x) = |y| j=1 π w (y j |y <j , x).", "(2) In our case, output sequences are linearized machine readable parses, called queries in the following.", "Given supervised data D sup = {(x t ,ȳ t )} n t=1 of question-query pairs, whereȳ t is the true target query for x t , the neural network can be trained using SGD and a cross-entropy (CE) objective: L CE = − 1 n n t=1 |ȳ| j=1 log π w (ȳ j |ȳ <j , x).", "(3) Counterfactual Learning from Deterministic Bandit Logs Counterfactual Learning Objectives.", "We assume a policy π w that, given an input x ∈ X , defines a conditional probability distribution over possible outputs y ∈ Y(x).", "Furthermore, we assume that the policy is parameterized by w and its gradient can be derived.", "In this work, π w is defined by the sequence-to-sequence model described in Section 3.", "We also assume that the model decomposes over individual output tokens, i.e.", "that the model produces the output token by token.", "The counterfactual learning problem can be described as follows: We are given a data log of ∇ wRDPM = 1 n n t=1 δ t π w (y t |x t )∇ w log π w (y t |x t ).", "∇ wRDPM+R = 1 n n t=1 [δ tπw (y t |x t )(∇ w log π w (y t |x t ) − 1 n n u=1π w (y u |x u )∇ log π w (y u |x u ))].", "∇ wRDPM+OSL = 1 m m t=1 δ tπw,w (y t |x t )∇ w log π w (y t |x t ).", "∇ wRDPM+T = 1 n n t=1 |y| j=1 δ j π w (y j |x t ) |y| j=1 ∇ w log π w (y j |x t ).", "∇ wRDPM+T+OSL = 1 m m t=1 |y| j=1 δ jπw,w (y t |x t ) |y| j=1 ∇ w log π w (y j |x t ).", "triples D log = {(x t , y t , δ t )} n t=1 where outputs y t for inputs x t were generated by a logging system under policy π 0 , and loss values δ t ∈ [−1, 0] 2 were observed for the generated data points.", "Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy π w given the data log D log .", "In case of deterministic logging, outputs are logged with propensity π 0 (y t |x t ) = 1, t = 1, .", ".", ".", ", n. 
This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b) , without the possibility to correct the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983) : R DPM (π w ) = 1 n n t=1 δ t π w (y t |x t ).", "(4) This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a) .", "This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016) .", "The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al.", "(2017b) : R DPM+R (π w ) = 1 n n t=1 δ tπw (y t |x t ) (5) = 1 n n t=1 δ t π w (y t |x t ) 1 n n t=1 π w (y t |x t ) .", "Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms where gradients using the score function gradient estimator (Fu, 2006) are shown in Table 1 .", "Reweighting in Stochastic Learning.", "As shown in Swaminathan and Joachims (2015b) and Lawrence et al.", "(2017a) , reweighting over the entire data log D log is crucial since it avoids that high loss outputs in the log take away probability mass from low loss outputs.", "This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order O( 1 n ) that decreases as n increases (Kong, 1992) .", "The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup since minibatch sizes large enough to retain the desirable reweighting properties are infeasible for large neural networks.", "We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting.", "The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990) .", "In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions.", "In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters w from some previous iteration.", "This allows us to perform gradient descent updates and reweighting asynchronously.", "Updates are performed using minibatches, however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate.", "The new objective, called one-step-late reweighted DPM objective (DPM+OSL), optimizes π w,w with respect to w for a minibatch of size m, with reweighting over the entire log of size n under parameters w : R DPM+OSL (π w ) = 1 m m t=1 δ tπw,w (y t |x t ) (6) = 1 m m t=1 δ t π w (y t |x t ) 1 n n t=1 π w (y t |x t ) .", "If the renormalization is updated periodically, e.g.", "after every validation step, renormalizations under w or w are not much different and will not hamper convergence.", "Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance.", "The gradient for learning with OSL updates is given in Table 1 .", "Token-Level Rewards.", "For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to 
be helpful: For humans, it is hard to assign a graded reward to a query at a sequence level because either the query is correct or it is not.", "In particular, with a sequence level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct.", "Assigning rewards at token-level will ease the feedback task and allow the semantic parser to learn from partially correct queries.", "Thus, assuming the underlying policy can decompose over tokens, a token level (DPM+T) reward objective can be defined: R DPM+T (π w ) = 1 n n t=1   |y| j=1 δ j π w (y j |x t )   .", "(7) Analogously, we can define an objective that combines the token-level rewards and the minibatched reweighting (DPM+T+OSL): R DPM+T+OSL (π w ) = 1 m m t=1 |y| j=1 δ j π w (y j |x t ) 1 n n t=1 π w (y t |x t ) .", "(8) Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1 .", "Semantic Parsing in the OpenStreetMap Domain OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interests in the world.", "A point of interest consists of one or more associated GPS points.", "Further relevant information may be added at the discretion of the volunteer in the form of tags.", "Each tag consists of a key and an associated value, for example \"tourism : hotel\".", "The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis to create a natural language interface to the OSM database.", "It pairs English questions with machine readable parses, i.e.", "queries that can be executed against OSM.", "Human Feedback Collection.", "The task of creating a natural language interface for OSM demonstrates typical difficulties that make it expensive to collect supervised data.", "The machine readable language of the queries is based on the OVERPASS query language which was specifically designed for the OSM database.", "It is thus not easily possible to find experts that could provide correct queries.", "It is equally difficult to ask workers at crowdsourcing platforms for the correct answer.", "For many questions, the answer set is too large to expect a worker to count or list them all in a reasonable amount of time and without errors.", "For example, for the question \"How many hotels are there in Paris?\"", "there are 951 hotels annotated in the OSM database.", "Instead we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human.", "The question and the created block of statements are embedded in a user interface with a form that can be filled out by users.", "Each statement is accompanied by a set of radio buttons where a user can select either \"Yes\" or \"No\".", "For a screenshot of the interface and an example see Figure 2 .", "In total there are 8 different types of statements.", "The presence of certain tokens in a query trigger different statement types.", "For example, the token \"area\" triggers the statement type \"Town\".", "The statement is then populated with the corresponding information from the query.", "In the case of \"area\", the following OSM value is used, e.g.", "\"Paris\".", "With this, the meaning of every query can be captured by a set of human-understandable statements.", "For a full overview of all statement types and their triggers see section B of the supplementary material.", "OSM tags and keys are generally understandable.", "For example, the correct OSM tag for \"hotels\" is \"tourism : hotel\" and when searching for websites, the 
correct question type key would be \"website\".", "Nevertheless, for each OSM tag or key, we automatically search for the corresponding Wikipedia page on the OpenStreetMap Wiki 3 and extract the description for this tag or key.", "The description is made available to the user in form of a tool-tip that appears when hovering over the tag or key with the mouse.", "If a user is unsure if a OSM tag or key is correct, they can read this description to help in their decision making.", "Once the form is submitted, a script maps each statement back to the corresponding tokens in the original query.", "These tokens then receive negative or positive feedback based on the feedback the user provided for that statement.", "Corpus Extension.", "Similar to the extension of the NLMAPS corpus by Lawrence and Riezler (2016) who include shortened questions which are more typically used by humans in search tasks, we present an automatic extension that allows a larger coverage of common OSM tags.", "4 The basis for the extension is a hand-written, online freely available list 5 that links natural language expressions such as \"cash machine\" to appropriate OSM tags, in this case \"amenity : atm\".", "Using the list, we generate for each unique expression-tag pair a set of question-query pairs.", "These latter pairs contain 3 https://wiki.openstreetmap.org/ 4 The extended dataset, called NLMAPS V2, will be released upon acceptance of the paper.", "(Lawrence and Riezler, 2016) and the automatic extensions of the most common OSM tags.", "several placeholders which will be filled automatically in a second step.", "To fill the area placeholder $LOC, we sample from a list of 30 cities from France, Germany and the UK.", "$POI is the placeholder for a point of interest.", "We sample it from the list of objects which are located in the prior sampled city and which have a name key.", "The corresponding value belonging to the name key will be used to fill this spot.", "The placeholder $QTYPE is filled by uniformly sampling from the four primary question types available in the NLMAPS query language.", "On the natural language side they corresponded to \"How many\", \"Where\", \"Is there\" and $KEY.", "$KEY is a further parameter belonging to the primary question operator FINDKEY.", "It can be filled by any OSM key, such as name, website or height.", "To ensure that there will be an answer for the generated query, we first ran a query with the current tag (\"amenity : atm\") to find all objects fulfilling this requirement in the area of the already sampled city.", "From the list of returned objects and the keys that appear in association with them, we uniformly sampled a key.", "For $DIST we chose between the pre-defined options for walking distance and within city distance.", "The expressions map to corresponding values which define the size of a radius in which objects of interest (with tag \"amenity : atm\") will be located.", "If the walking distance was selected, we added \"in walking distance\" to the question.", "Otherwise no extra text was added to the question, assuming the within city distance to be the default.", "This sampling process was repeated twice.", "Table 2 presents the corpus statistics, comparing NLMAPS to our extension.", "The automatic extension, obviating the need for expensive manual work, allows a vast increase of question-query pairs by an order of magnitude.", "Consequently the number of tokens and types increase in a similar vein.", "However, the average sentence length drops.", "This comes as 
no surprise due to the nature of the rather simple hand-written list which contains never more than one tag for an element, resulting in simpler question structures.", "However, the main idea of utilizing this list is to extend the coverage to previously unknown OSM tags.", "With 6,582 distinct tags compared to the previous 477, this was clearly successful.", "Together with the still complex sentences from the original corpus, a semantic parser is now able to learn both complex questions and a large variety of tags.", "An experiment that empirically validates the usefulness of the automatically created data can be found in the supplementary material, section A.", "Experiments General Settings.", "In our experiments we use the sequence-to-sequence neural network package NEMATUS (Sennrich et al., 2017) .", "Following the method used by Haas and Riezler (2016) , we split the queries into individual tokens by taking a pre-order traversal of the original tree-like structure.", "For example, \"query(west(area(keyval('name','Paris')), nwr(keyval('railway','station'))),qtype(count))\" becomes \"query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0\".", "The SGD optimizer used is ADADELTA (Zeiler, 2012).", "The model employs 1,024 hidden units and word embeddings of size 1,000.", "The maximum sentence length is 200 and gradients are clipped if they exceed a value of 1.0.", "The stopping point is determined by validation on the development set and selecting the point at which the highest evaluation score is obtained.", "F1 validation is run after every 100 updates, and each update is made on the basis of a minibatch of size 80.", "The evaluation of all models is based on the answers obtained by executing the most likely query obtained after a beam search with a beam of size 12.", "We report the F1 score which is the harmonic mean of precision and recall.", "Recall is defined as the percentage of fully correct answers divided by the set size.", "Precision is the percentage of correct answers out of the set of answers with non-empty strings.", "Statistical significance between models is measured using an approximate randomization test (Noreen, 1989) .", "Baseline Parser & Log Creation.", "Our experiment design assumes a baseline neural semantic parser that is trained in fully supervised fashion, and is to be improved by bandit feedback obtained for system outputs from the baseline system for given questions.", "For this purpose, we select 2,000 question-query pairs randomly from the full extended NLMAPS V2 corpus.", "We will call this dataset D sup .", "Using this dataset, a baseline semantic parser is trained in supervised fashion under a cross-entropy objective.", "It obtains an F1 score of 57.45% and serves as the logging policy π 0 .", "Furthermore we randomly split off 1,843 and 2,000 pairs for a development and test set, respectively.", "This leaves a set of 22,765 question-query pairs.", "The questions can be used as input and bandit feedback can be collected for the most likely output of the semantic parser.", "We refer to this dataset as D log .", "To collect human feedback, we take the first 1,000 questions from D log and use π 0 to parse these questions to obtain one output query for each.", "5 question-query pairs are discarded because the suggested query is invalid.", "For the remaining question-query pairs, the queries are each transformed into a block of human-understandable statements and embedded into the user interface described in Section 
5.", "We recruited 9 users to provide feedback for these question-query pairs.", "The resulting log is referred to as D human .", "Every question-query pair is purposely evaluated only once to mimic a realistic real-world scenario where user logs are collected as users use the system.", "In this scenario, it is also not possible to explicitly obtain several evaluations for the same question-query pair.", "Some examples of the received feedback can be found in the supplementary material, section C. To verify that the feedback collection is efficient, we measured the time each user took from loading a form to submitting it.", "To provide feedback for one question-query pair, users took 16.4 seconds on average with a standard deviation of 33.2 seconds.", "The vast majority (728 instances) are completed in less than 10 seconds.", "Learning from Human Bandit Feedback.", "An analysis of D human shows that for 531 queries all corresponding statements were marked as correct.", "We consider a simple baseline that treats completely correct logged data as a supervised data set with which training continues using the crossentropy objective.", "We call this baseline banditto-supervised conversion (B2S).", "Furthermore, we present experimental results using the log D human for stochastic (minibatch) gradient descent optimization of the counterfactual objectives introduced in equations 4, 6, 7 and 8.", "For the tokenlevel feedback, we map the evaluated statements back to the corresponding tokens in the original query and assign these tokens a feedback of 0 if the corresponding statement was marked as wrong and 1 otherwise.", "In the case of sequence-level feedback, the query receives a feedback of 1 if all statements are marked correct, 0 otherwise.", "For the OSL objectives, a separate experiment (see below) showed that updating the reweighting constant after every validation step promises the best trade-off between performance and speed.", "Results, averaged over 3 runs, are reported in Table 3 .", "The B2S model can slightly improve upon the baseline but not significantly.", "DPM improves further, significantly beating the baseline.", "Using the multiplicative control variate modified for SGD by OSL updates does not seem to help in this setup.", "By moving to token-level rewards, it is possible to learn from partially correct queries.", "These partially correct queries provide valuable information that is not present in the subset of correct answers employed by the previous models.", "Optimizing DPM+T leads to a slight improvement and combined with the multiplicative control variate, DPM+T+OSL yields an improvement of about 1.0 in F1 score upon the baseline.", "It beats both the baseline and the B2S model by a significant margin.", "Learning from Large-Scale Simulated Feedback.", "We want to investigate whether the results scale if a larger log is used.", "Thus, we use π 0 to parse all 22,765 questions from D log and obtain for each an output query.", "For sequence level rewards, we assign feedback of 1 for a query if it is identical to the true target query, 0 otherwise.", "We also simulate token-level rewards by iterating over the indices of the output and assigning a feedback of 1 if the same token appears at the current index for the true target query, 0 otherwise.", "An analysis of D log shows that 46.27% of the queries have a sequence level reward of 1 and are Table 4 .", "We see that the B2S model outperforms the baseline model by a large margin, yielding an increase in F1 score by 6.24 
points.", "Optimizing the DPM objective also yields a significant increase over the baseline, but its performance falls short of the stronger B2S baseline.", "Optimizing the DPM+OSL objective leads to a substantial improvement in F1 score over optimizing DPM but still falls slightly short of the strong B2S baseline.", "Token-level rewards are again crucial to beat the B2S baseline significantly.", "DPM+T is already able to significantly outperform B2S in this setup and DPM+T+OSL can improve upon this further.", "tained the correct answer and the baseline system did not (see Table 5 ).", "The analysis showed that the vast majority of previously wrong queries were fixed by correcting an OSM tag in the query.", "For example, for the question \"closest Florist from Manchester in walking distance\" the baseline system chose the tag \"landuse : retail\" in the query, whereas DPM+T+OSL learnt that the correct tag is \"shop : florist\".", "In some cases, the question type had to be corrected, e.g.", "the baseline's suggested query returned the location of a point of interest but DPM+T+OSL correctly returns the phone number.", "Finally, in a few cases DPM+T+OSL corrected the structure for a query, e.g.", "by searching for a point of interest in the east of an area rather than the south.", "Analysis OSL Update Variation.", "Using the DPM+T+OSL objective and the simulated feedback setup, we vary the frequency of updating the reweighting constant.", "Results are reported in Table 6 .", "Calculating the constant only once at the beginning leads to a near identical result in F1 score as not using OSL.", "The more frequent update strategies, once or four times per epoch, are more effective.", "Both strategies reduce variance further and lead to higher F1 scores.", "Updating four times per epoch compared to once per epoch, leads to a nominally higher performance in F1.", "It has the additional benefit that the re-calculation is done at the same time as the validation, leading to no additional slow down as executing the queries for the development set against the database takes longer than the re-calculation of the constant.", "Updating after every minibatch is infeasible as it slows down training too much.", "Compared to the previous setup, iterating over one epoch takes approximately an additional 5.5 hours.", "Conclusion We introduced a scenario for improving a neural semantic parser from logged bandit feedback.", "This scenario is important to avoid complex and costly data annotation for supervise learning, and it is realistic in commercial applications where weak feedback can be collected easily in large amounts from users.", "We presented robust counterfactual learning objectives that allow to perform stochastic gradient optimization which is crucial in working with neural networks.", "Furthermore, we showed that it is essential to obtain reward signals at the token-level in order to learn from partially correct queries.", "We presented experimental results using feedback collected from humans and a larger scale setup with simulated feedback.", "In both cases we show that a strong baseline using a bandit-to-supervised conversion can be significantly outperformed by a combination of a onestep-late reweighting and token-level rewards.", "Finally, our approach to collecting feedback can also be transferred to other domains.", "For example, (Yih et al., 2016) designed a user interface to help Freebase experts to efficiently create queries.", "This interface could be reversed: given a question and a query 
produced by a parser, the interface is filled out automatically and the user has to verify if the information fits." ] }
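The placeholder-filling procedure for the corpus extension described above ($LOC, $POI, $QTYPE, $KEY, $DIST) can be sketched roughly as follows. This is a minimal illustration, not the authors' released code; `osm_objects_in` and the city list are hypothetical stand-ins for the OSM lookups the paper relies on.

```python
import random

# Hypothetical stand-ins for the resources described in the paper:
CITIES = ["Edinburgh", "Paris", "Heidelberg"]        # paper: 30 cities from FR, DE, UK
QTYPES = ["How many", "Where", "Is there", "$KEY"]   # four primary NLMAPS question types

def osm_objects_in(city, tag):
    """Placeholder: return all OSM objects in `city` carrying `tag`,
    each as a dict of OSM keys to values (including a 'name' key)."""
    raise NotImplementedError

def sample_fillers(tag):
    """Sample placeholder values for one expression-tag entry,
    e.g. tag = 'amenity : atm' for the expression 'cash machine'."""
    fillers = {"$LOC": random.choice(CITIES)}
    # Running the query first guarantees the generated pair has an answer.
    objects = osm_objects_in(fillers["$LOC"], tag)
    named = [o for o in objects if "name" in o]
    fillers["$POI"] = random.choice(named)["name"]
    fillers["$QTYPE"] = random.choice(QTYPES)
    if fillers["$QTYPE"] == "$KEY":
        # $KEY is sampled uniformly from keys observed on the returned objects.
        keys = sorted({k for o in objects for k in o})
        fillers["$QTYPE"] = random.choice(keys)
    # WALKING_DIST adds "in walking distance" to the question;
    # the within-city distance is the unmarked default.
    fillers["$DIST"] = random.choice(["WALKING_DIST", "CITY_DIST"])
    return fillers
```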
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing", "Counterfactual Learning from Deterministic Bandit Logs", "Semantic Parsing in the OpenStreetMap Domain", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-127#paper-1346#slide-14
Take Away
- safely improve a system by collecting interaction logs - applicable to any task if the underlying model is differentiable - DPM+OSL: new objective for stochastic minibatch learning. Improving a Semantic Parser: - collect feedback by making parses human-understandable - judging a parse is often easier & faster than formulating a query - large question-parse corpus for QA in the geographical domain - integrate feedback form in the online NL interface to OSM
- safely improve a system by collecting interaction logs - applicable to any task if the underlying model is differentiable - DPM+OSL: new objective for stochastic minibatch learning. Improving a Semantic Parser: - collect feedback by making parses human-understandable - judging a parse is often easier & faster than formulating a query - large question-parse corpus for QA in the geographical domain - integrate feedback form in the online NL interface to OSM
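The mapping from per-statement human judgments back to token-level and sequence-level rewards (described in the section on learning from human bandit feedback) could look as follows. A hedged sketch: the data shapes (a dict from statement id to the query-token indices the statement was generated from) are assumptions, not the paper's actual implementation.

```python
def token_feedback(query_tokens, statement_tokens, judgments):
    """Map per-statement judgments back to per-token rewards.

    statement_tokens: dict statement_id -> indices of the query tokens the
    statement was generated from (assumed shape).
    judgments: dict statement_id -> True (correct) / False (wrong).
    Tokens of a statement marked wrong get reward 0, all other tokens 1.
    """
    rewards = [1] * len(query_tokens)
    for stmt_id, indices in statement_tokens.items():
        if not judgments[stmt_id]:
            for i in indices:
                rewards[i] = 0
    return rewards

def sequence_feedback(judgments):
    """Sequence-level reward: 1 iff every statement was marked correct."""
    return int(all(judgments.values()))
```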
[]
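For the large-scale simulated feedback, token-level rewards are simulated by position-wise comparison against the gold query, and the logged rewards then weight each token's contribution to the gradient. The first two functions follow the paper's description; the loss below is only a generic reward-weighted stand-in, not the paper's exact DPM/OSL objectives (equations 4-8), which lie outside this excerpt.

```python
def simulate_token_rewards(output_tokens, gold_tokens):
    """Position-wise simulated feedback: reward 1 if the output token equals
    the gold-query token at the same index, 0 otherwise."""
    return [int(i < len(gold_tokens) and tok == gold_tokens[i])
            for i, tok in enumerate(output_tokens)]

def simulate_sequence_reward(output_tokens, gold_tokens):
    """Sequence-level simulated feedback: 1 iff the queries are identical."""
    return int(output_tokens == gold_tokens)

def reward_weighted_nll(token_log_probs, token_rewards):
    """Generic reward-weighted negative log-likelihood for one logged query:
    only tokens with positive feedback contribute to the update. A simplified
    stand-in, not the paper's DPM/OSL objectives."""
    return -sum(r * lp for lp, r in zip(token_log_probs, token_rewards))
```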
GEM-SciDuet-train-128#paper-1349#slide-0
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a nouncompound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improving performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the noun-compounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al. (2013) somewhat generalizes to unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al. (2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g. in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods are not guaranteed to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign-language equivalent into an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most systems focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word noun-compounds and assumed an is-a relation between the parts, as in extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the WordNet definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, which focus on predicting a paraphrase template for a given noun-compound, we reformulate the task as a multi-task learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?\", \"what can be made of apple?\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps the model learn better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g. 'is made of' and 'made of'), or from shared constituents, e.g. '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company.", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a noun-compound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "We use the 19,491 noun-compounds found in the SemEval tasks' datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google N-gram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing the lemmas of w 1 and w 2 for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied by its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al. (2013), the
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g. count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution over paraphrases of a given length (sketched below).", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s_n ), for some predefined negative-sample score s_n .", "Similarly, for a word w_i that did not occur in a paraphrase p we add (w_i , p, UNK, s_n ) or (UNK, p, w_i , s_n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ]} surrounded by the sequences of words v_{1:i−1} and v_{i+1:n} , we encode the sequence using a bidirectional long short-term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the i-th output vector as representing the missing component: bLS(v_{1:i−1} , x, v_{i+1:n})_i .", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v_{1:i−1} and the subsequent words v_{i+1:n} .", "Prediction.", "We predict a distribution over the vocabulary of the missing component, i.e. to predict w 1 correctly we need to predict its index in the word vocabulary V_w , while the prediction of p is from the vocabulary of paraphrases in the training set, V_p .", "We predict the following distributions: p̂ = softmax(W_p · bLS(w 2 , [p], w 1 )_2), ŵ_1 = softmax(W_w · bLS(w 2 , p_{1:n} , [w 1 ])_{n+1}), ŵ_2 = softmax(W_w · bLS([w 2 ], p_{1:n} , w 1 )_1) (Equation 1), where W_w ∈ R^{|V_w|×2d} , W_p ∈ R^{|V_p|×2d} , and d is the embedding dimension (an illustrative re-implementation of this prediction step appears below).", "During training, we compute the cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best-scoring index in each distribution: p_i = argmax(p̂), w_{1i} = argmax(ŵ_1), w_{2i} = argmax(ŵ_2) (Equation 2).", "The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W_w , the MLP that predicts the index of a word.", "Table 1 : Examples of top-ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each component pair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\", but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specific verbal paraphrases.", "The list often contains multiple semantically-similar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model's training objective (Section 3), which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity, we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexically-divergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound with multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: p̂_1 , ..., p̂_k = argmax_k p̂, where p̂ is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract a set of features for each paraphrase p, the last of which is its confidence score.", "The last feature incorporates the original model score into the decision,
so as not to let other considerations, such as preposition frequency in the training set, take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with a final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and n-grams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g. in derivations) yields a partial score.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each noun-compound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, and it is therefore expected to score poorly on the recall-aware isomorphic setting.", "(Table 2: method / isomorphic / non-isomorphic — SFS (Versley, 2013): 23.1 / 17.9; IIITH (Surtani et al., 2013): 23.1 / 25.8; MELODI (Van de Cruys et al., 2013): 13.0 / 54.8; SemEval 2013 Baseline (Hendrickx et al., 2013): 13)", "Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH), but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all noun-compounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that these errors often stem from an n-gram that does not respect the syntactic structure of the sentence, e.g. a sentence such as \"rinse away the oil from baby's head\" produces the n-gram \"oil from baby\".", "The false negative errors included many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many gold prepositional paraphrases contained determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 into the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g. '[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases: p̂_1 , ..., p̂_k = argmax_k p̂, where p̂ is the distribution over the paraphrase vocabulary V_p , as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in p̂: par(w 1 w 2 ) = (Σ_{i=1}^{k} p̂_{p̂_i} · V_{p̂_i}) / (Σ_{i=1}^{k} p̂_{p̂_i}) (Equation 3; a small sketch of this weighted average appears below).", "We train a linear classifier, and represent w 1 w 2 in a feature vector f(w 1 w 2 ) in two variants: paraphrase: f(w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings, f(w 1 w 2 ) = [par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings, f(w 1 w 2 ) = [w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits to train, test, and validation: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with (Socher et al., 2012) models.", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F 1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F 1 points), OBJECTIVE (+5.5), AT-TRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional nouncompounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds to recognize non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 nouncompounds along with human judgments about their compositionality in a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest to apply it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility to use the biL-STM for generating completely new paraphrase templates unseen during training." ] }
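Equation 1 above can be made concrete with a small model sketch. This is an illustration in PyTorch rather than the DyNet implementation the paper describes; layer sizes follow the stated setup (100-dimensional frozen GloVe vectors, a shared biLSTM, and output matrices W_p ∈ R^{|V_p|×2d}, W_w ∈ R^{|V_w|×2d}).

```python
import torch
import torch.nn as nn

class ParaphrasePredictor(nn.Module):
    """Illustrative PyTorch sketch of the prediction step in Equation 1
    (the paper's implementation is in DyNet). d = 100 as in the paper."""
    def __init__(self, word_vocab_size, para_vocab_size, d=100):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab_size, d)  # pre-trained GloVe,
        self.word_emb.weight.requires_grad = False        # fixed during training
        self.slot_emb = nn.Embedding(3, d)                # learned [w1], [w2], [p]
        self.bilstm = nn.LSTM(d, d, bidirectional=True, batch_first=True)
        self.W_p = nn.Linear(2 * d, para_vocab_size)      # W_p in R^{|V_p| x 2d}
        self.W_w = nn.Linear(2 * d, word_vocab_size)      # shared by subtasks (2), (3)

    def predict_paraphrase(self, w2_ids, w1_ids):
        """p_hat = softmax(W_p . bLS(w2, [p], w1)_2): encode the 3-token
        sequence and read the biLSTM output at the [p] slot (middle position)."""
        p_slot = self.slot_emb(torch.full_like(w2_ids, 2))      # index 2 = [p]
        seq = torch.stack([self.word_emb(w2_ids), p_slot,
                           self.word_emb(w1_ids)], dim=1)       # (batch, 3, d)
        out, _ = self.bilstm(seq)                               # (batch, 3, 2d)
        return torch.softmax(self.W_p(out[:, 1, :]), dim=-1)
        # Subtasks (2) and (3) analogously read the biLSTM output at the [w1]
        # slot of (w2, p_1..p_n, [w1]) resp. the [w2] slot of ([w2], p_1..p_n, w1).
```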
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-0
Noun Compounds
Two or more nouns function as a unit to create a new concept: hot dog, hot dog bun, hot dog bun package... We focus on two-word compounds. They express an implicit relationship between the constituent nouns: apple cake: cake made of apples; birthday cake: cake eaten on a birthday. They are like text compression devices [Nakov, 2013] - we're pretty good at decompressing them!
Two or more nouns function as a unit to create a new concept: hot dog, hot dog bun, hot dog bun package... We focus on two-word compounds. They express an implicit relationship between the constituent nouns: apple cake: cake made of apples; birthday cake: cake eaten on a birthday. They are like text compression devices [Nakov, 2013] - we're pretty good at decompressing them!
[]
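The per-length frequency normalization used when weighting training paraphrases (so that short, globally frequent paraphrases such as 'cake of apples' do not dominate longer ones like 'cake made of apples') can be sketched as follows; the dict-based interface is an assumption.

```python
from collections import defaultdict

def length_normalized_scores(paraphrase_counts):
    """Turn raw n-gram counts into per-length distributions so that shorter
    (and therefore globally more frequent) paraphrases do not dominate.
    `paraphrase_counts` maps a paraphrase string to its corpus frequency."""
    totals = defaultdict(float)
    for para, count in paraphrase_counts.items():
        totals[len(para.split())] += count
    return {para: count / totals[len(para.split())]
            for para, count in paraphrase_counts.items()}
```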
GEM-SciDuet-train-128#paper-1349#slide-1
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a nouncompound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improving performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the noun-compounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al. (2013) somewhat generalizes to unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al. (2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g. in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods are not guaranteed to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign-language equivalent into an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most systems focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word noun-compounds and assumed an is-a relation between the parts, as in extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the WordNet definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, which focus on predicting a paraphrase template for a given noun-compound, we reformulate the task as a multi-task learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?\", \"what can be made of apple?\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps the model learn better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g. 'is made of' and 'made of'), or from shared constituents, e.g. '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company.", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a noun-compound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "We use the 19,491 noun-compounds found in the SemEval tasks' datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google N-gram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing the lemmas of w 1 and w 2 for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied by its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al. (2013), the
Model For a training instance (w2, p, w1, s), we predict each item given the encoding of the other two.", "Encoding. We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014), which are fixed during training.", "In addition, we learn embeddings for the special words [w1], [w2], and [p], which are used to represent a missing component, as in \"cake made of [w1]\", \"[w2] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w1], [w2]} surrounded by the sequences of words v_{1:i-1} and v_{i+1:n}, we encode the sequence using a bidirectional long short-term memory (bi-LSTM) network (Graves and Schmidhuber, 2005), and take the i-th output vector as representing the missing component: $\mathrm{bLS}(v_{1:i-1}, x, v_{i+1:n})_i$.", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v_{1:i-1} and the subsequent words v_{i+1:n}.", "Prediction. We predict a distribution over the vocabulary of the missing component, i.e. to predict w1 correctly we need to predict its index in the word vocabulary V_w, while the prediction of p is from the vocabulary of paraphrases in the training set, V_p.", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot \mathrm{bLS}(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}(\vec{w}_2, \vec{p}_{1:n}, [w_1])_{n+2})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}([w_2], \vec{p}_{1:n}, \vec{w}_1)_1)$ (1), where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and d is the embedding dimension.", "During training, we compute the cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best-scoring index in each distribution: $p_i = \mathrm{argmax}(\hat{p})$, $w_{1i} = \mathrm{argmax}(\hat{w}_1)$, $w_{2i} = \mathrm{argmax}(\hat{w}_2)$ (2).", "The subtasks share the pre-trained word embeddings, the special embeddings, and the bi-LSTM parameters.", "Subtasks (2) and (3) also share W_w, the MLP that predicts the index of a word.", "Table 1: Examples of top-ranked predicted components using the model: predicting the paraphrase given w1 and w2 (left), w1 given w2 and the paraphrase (middle), and w2 given w1 and the paraphrase (right).", "Implementation Details. The model is implemented in DyNet (Neubig et al., 2017).", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983), and set the batch size to 10 and the other hyper-parameters to their default values.
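As a rough illustration of Equations (1)-(2), here is a schematic numpy forward pass: a bi-LSTM encodes the sequence with the [p] slot masked, and the output vector at the slot position feeds a softmax over a toy paraphrase vocabulary. Dimensions, initialization, and the vocabularies are invented for the example; the actual model is trained in DyNet with learned parameters.

```python
# Schematic numpy re-implementation of the prediction step: bi-LSTM encoding
# of (w2, [p], w1) and a softmax over paraphrases at the masked slot.
import numpy as np

rng = np.random.default_rng(0)
d = 8                      # embedding / hidden size (100 in the paper)
vocab = ["cake", "apple", "made", "of", "[w1]", "[w2]", "[p]"]
E = rng.normal(size=(len(vocab), d))          # word + slot embeddings
paraphrases = ["made of", "of", "unrelated"]  # toy paraphrase vocabulary V_p

def lstm(inputs, W, U, b):
    """A single-direction LSTM pass; returns the hidden state sequence."""
    h = np.zeros(d); c = np.zeros(d); hs = []
    for x in inputs:
        z = W @ x + U @ h + b
        i, f, o, g = (1/(1+np.exp(-z[:d])), 1/(1+np.exp(-z[d:2*d])),
                      1/(1+np.exp(-z[2*d:3*d])), np.tanh(z[3*d:]))
        c = f * c + i * g
        h = o * np.tanh(c)
        hs.append(h)
    return hs

# shared bi-LSTM parameters (random here; learned in practice)
Wf, Uf, bf = rng.normal(size=(4*d, d)), rng.normal(size=(4*d, d)), np.zeros(4*d)
Wb, Ub, bb = rng.normal(size=(4*d, d)), rng.normal(size=(4*d, d)), np.zeros(4*d)
W_p = rng.normal(size=(len(paraphrases), 2*d))   # paraphrase softmax head

def softmax(x):
    e = np.exp(x - x.max()); return e / e.sum()

def predict_paraphrase(w2, w1):
    seq = [E[vocab.index(w)] for w in (w2, "[p]", w1)]
    fwd = lstm(seq, Wf, Uf, bf)
    bwd = lstm(seq[::-1], Wb, Ub, bb)[::-1]
    out = np.concatenate([fwd[1], bwd[1]])       # output at the [p] slot
    return softmax(W_p @ out)

p_hat = predict_paraphrase("cake", "apple")
print(paraphrases[int(np.argmax(p_hat))])        # argmax as in Equation (2)
```

The same shared encoder with the two output heads (W_p over paraphrases, W_w over words) realizes all three subtasks; only the masked position changes.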
Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each component pair.", "We note that most of the (w2, paraphrase, w1) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specific verbal paraphrases.", "The list often contains multiple semantically-similar paraphrases, such as '[w2] involved in [w1]' and '[w2] in [w1] industry'.", "This is a result of the model training objective (Section 3), which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2.", "The projection positions semantically-similar but lexically-divergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases (§5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations (§5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition. The general goal of this task is to interpret each noun-compound with multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model. For a given noun-compound w1 w2, we first predict the k = 250 most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution of paraphrases defined in Equation (1).", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011).", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract a set of features for a paraphrase p, the last of which is its confidence score.", "The last feature incorporates the original model score into the decision, so as not to let other considerations, such as preposition frequency in the training set, take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with a final score < 0.025.", "The values for k and the threshold were tuned on the training set.
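A sketch of the re-ranking step might look as follows: an SVM is trained on pairs of paraphrase feature vectors to decide which member of the pair should rank higher, and candidates are then ordered and pruned by combining rank with the original model score. The feature function below is a placeholder (the paper's exact feature set is not reproduced in the source text), and the rank-times-score combination is one possible reading of the description above, not the authors' exact formula.

```python
# Sketch of pairwise re-ranking with an SVM, plus rank/score pruning.
import functools
import numpy as np
from sklearn.svm import SVC

def features(paraphrase, model_score):
    # placeholder features: paraphrase length and the model's confidence
    return np.array([len(paraphrase.split()), model_score])

# toy gold pairs: (features of a, features of b, 1 iff a should outrank b)
pairs = [
    (features("made of", 0.9), features("of", 0.8), 1),
    (features("of", 0.8), features("made of", 0.9), 0),
    (features("unrelated to", 0.1), features("made of", 0.9), 0),
    (features("made of", 0.9), features("unrelated to", 0.1), 1),
]
X = np.array([np.concatenate([a, b]) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])
ranker = SVC().fit(X, y)  # pairwise preference classifier

def outranks(fa, fb):
    return ranker.predict(np.concatenate([fa, fb]).reshape(1, -1))[0] == 1

def rerank_and_prune(candidates, threshold=0.025):
    """candidates: list of (paraphrase, model_score) pairs."""
    feats = {p: features(p, s) for p, s in candidates}
    # greedy use of the pairwise classifier as a sort comparator
    ordered = sorted(
        candidates,
        key=functools.cmp_to_key(
            lambda a, b: -1 if outranks(feats[a[0]], feats[b[0]]) else 1))
    # combine rank with the original model score; better ranks get a larger
    # multiplier here, which is one plausible reading of "rank * score"
    scored = [(p, (len(ordered) - rank) * s / len(ordered))
              for rank, (p, s) in enumerate(ordered)]
    return [(p, sc) for p, sc in scored if sc >= threshold]

print(rerank_and_prune([("made of", 0.9), ("of", 0.8), ("unrelated to", 0.1)]))
```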
Evaluation Settings. The SemEval 2013 task provided a scorer that compares words and n-grams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g. in derivations) yields a partial score.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines. We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each noun-compound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Table 2 (Method: isomorphic / non-isomorphic): SFS (Versley, 2013): 23.1 / 17.9; IIITH (Surtani et al., 2013): 23.1 / 25.8; MELODI (Van de Cruys et al., 2013): 13.0 / 54.8; SemEval 2013 Baseline (Hendrickx et al., 2013): 13.", "Table 3: Categories of false positive and false negative predictions along with their percentage.", "Results. Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases for only a third of the noun-compounds (61/181), expectedly yielding poor performance in the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all noun-compounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.
Error Analysis. We analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence, e.g. a sentence such as \"rinse away the oil from baby's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases contained determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w1 w2 to the relation that holds between w1 and w2.", "Potentially, the corpus co-occurrences of w1 and w2 may contribute to the classification, e.g. '[w2] held at [w1]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model. We generate a paraphrase vector representation par(w1 w2) for a given noun-compound w1 w2 as follows.", "We predict the indices of the k most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution over the paraphrase vocabulary V_p, as defined in Equation (1).", "We then encode each paraphrase using the bi-LSTM, and average the paraphrase vectors, weighted by their confidence scores in $\hat{p}$: $\mathit{par}(w_1w_2) = \frac{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i} \cdot V_{p_{\hat{p}_i}}}{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i}}$ (3).", "We train a linear classifier, and represent w1 w2 with a feature vector f(w1 w2) in two variants: paraphrase: f(w1 w2) = par(w1 w2), or integrated: concatenated to the constituent word embeddings, f(w1 w2) = [par(w1 w2), w1, w2].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings, f(w1 w2) = [w1, w2] (distributional).
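Equation (3) and the two feature variants reduce to a few lines of numpy, sketched below with random vectors standing in for the bi-LSTM paraphrase encodings and the GloVe constituent embeddings; the dimensions are illustrative.

```python
# Sketch of Equation (3): a confidence-weighted average of encoded top-k
# paraphrases, optionally concatenated with the constituent embeddings.
import numpy as np

rng = np.random.default_rng(1)
d = 8

def paraphrase_vector(top_k):
    """top_k: list of (paraphrase_encoding, confidence) pairs; Eq. (3)."""
    vecs = np.stack([v for v, _ in top_k])
    conf = np.array([c for _, c in top_k])
    return (conf[:, None] * vecs).sum(axis=0) / conf.sum()

top_k = [(rng.normal(size=2 * d), 0.6),   # e.g. '[w2] made of [w1]'
         (rng.normal(size=2 * d), 0.3),   # e.g. '[w2] of [w1]'
         (rng.normal(size=2 * d), 0.1)]   # e.g. '[w2] with [w1]'
w1_emb, w2_emb = rng.normal(size=d), rng.normal(size=d)

par = paraphrase_vector(top_k)
f_paraphrase = par                                      # 'paraphrase' variant
f_integrated = np.concatenate([par, w1_emb, w2_emb])    # 'integrated' variant
print(f_paraphrase.shape, f_integrated.shape)           # (16,) (32,)
```

Either feature vector can then be fed to a standard linear classifier such as scikit-learn's LogisticRegression or LinearSVC, as the text describes.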
Datasets. We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled with 37 fine-grained relations (Tratz-fine) or 12 coarse-grained relations (Tratz-coarse).", "We report performance on two different splits of the dataset into train, test, and validation sets: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015), a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g. inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018).", "Baselines. We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010): we re-implement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016): a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with compositional models (Socher et al., 2012).", "We report the results from Shwartz and Waterson (2018).", "3) Paraphrase-based (Shwartz and Waterson, 2018): a neural classification model that learns an LSTM-based representation of the joint occurrences of w1 and w2 in a corpus (i.e. observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results. Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model; however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method of Shwartz and Waterson (2018), in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis. To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-coarse lexical split.", "Examination of the per-relation F1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F1 points), OBJECTIVE (+5.5), ATTRIBUTE (+3.8) and LOCATION/PART-WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top-ranked paraphrases which are indicative of the gold label relation.
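The per-relation comparison used in this analysis can be reproduced with scikit-learn's f1_score, as in the toy sketch below; the labels and predictions are invented stand-ins for the Tratz-coarse outputs of the two classifiers.

```python
# Sketch of the per-relation F1 comparison: given gold labels and the
# predictions of two classifiers, report the F1 change per relation.
from sklearn.metrics import f1_score

labels = ["TOPICAL", "OBJECTIVE", "ATTRIBUTE"]
gold           = ["TOPICAL", "TOPICAL", "OBJECTIVE", "ATTRIBUTE", "OBJECTIVE"]
distributional = ["OBJECTIVE", "TOPICAL", "OBJECTIVE", "OBJECTIVE", "OBJECTIVE"]
integrated     = ["TOPICAL", "TOPICAL", "OBJECTIVE", "ATTRIBUTE", "OBJECTIVE"]

f1_dist = f1_score(gold, distributional, labels=labels, average=None)
f1_int  = f1_score(gold, integrated, labels=labels, average=None)
for rel, a, b in zip(labels, f1_dist, f1_int):
    print("%s: %+0.1f F1 points" % (rel, 100 * (b - a)))
```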
Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional noun-compounds, which are included in the classification dataset (§5.2).", "We assumed that these compounds, more often than compositional ones, would consist of unrelated constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w2] is unrelated to [w1]'.", "Here, we assess whether our model succeeds in recognizing non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al. (2011), which consists of 90 noun-compounds along with human judgments about their compositionality on a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest applying it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al. (2011).", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility of using the bi-LSTM to generate completely new paraphrase templates unseen during training.
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-1
We are good at Interpreting Noun Compounds
We easily interpret noun-compounds Even when we see them for the first time What is a parsley cake? cake eaten on a parsley?
We easily interpret noun-compounds Even when we see them for the first time What is a parsley cake? cake eaten on a parsley?
[]
GEM-SciDuet-train-128#paper-1349#slide-2
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improve performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the nouncompounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g.", "in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered as a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, (23 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most system focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word nouncompounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the Word-Net definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative samples score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ] } surrounded by the sequences of words v 1:i−1 and v i+1:n , we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: bLS(v 1:i , x, v i+1:n ) i .", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v 1:i−1 and the subsequent words v i+1:n .", "Prediction.", "We predict a distribution of the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: p = softmax(W p · bLS( w 2 , [p], w 1 ) 2 ) w 1 = softmax(W w · bLS( w 2 , p 1:n , [w 1 ]) n+1 ) w 2 = softmax(W w · bLS([w 2 ], p 1:n , w 1 ) 1 ) (1) where W w ∈ R |Vw|×2d , W p ∈ R |Vp|×2d , and d is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: 3 p i = argmax(p) w 1i = argmax(ŵ 1 ) w 2i = argmax(ŵ 2 ) (2) The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to 
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each componentpair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specified verbal paraphrases.", "The list often contains multiple semanticallysimilar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexicallydivergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, 4 the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: p 1 , ...,p k = argmax kp , wherep is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision, 
as to not let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and ngrams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each nouncompound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each nouncompound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Method isomorphic non-isomorphic Baselines SFS (Versley, 2013) 23.1 17.9 IIITH (Surtani et al., 2013) 23.1 25.8 MELODI (Van de Cruys et al., 2013) 13.0 54.8 SemEval 2013 Baseline (Hendrickx et al., 2013) 13 Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the nonisomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all nouncompounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We 
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 nouncompounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manu-ally annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stem from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 to the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases:p 1 , ...,p k = argmax kp , wherep is the distribution on the paraphrase vocabulary V p , as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores inp: par(w 1 w 2 ) = k i=1pp i · V pp i k i=1pp i (3) We train a linear classifier, and represent w 1 w 2 in a feature vector f (w 1 w 2 ) in two variants: paraphrase: f (w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings f (w 1 w 2 ) = [ par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a nouncompound by the concatenation of its constituent embeddings f (w 1 w 2 ) = [ w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations 
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits to train, test, and validation: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with (Socher et al., 2012) models.", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F 1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F 1 points), OBJECTIVE (+5.5), AT-TRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional nouncompounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds to recognize non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 nouncompounds along with human judgments about their compositionality in a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest to apply it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility to use the biL-STM for generating completely new paraphrase templates unseen during training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-2
Generalizing Existing Knowledge
What can cake be made of? Parsley (sort of) fits into this distribution Similar to selectional preferences [Pantel et al., 2007]
What can cake be made of? Parsley (sort of) fits into this distribution Similar to selectional preferences [Pantel et al., 2007]
[]
GEM-SciDuet-train-128#paper-1349#slide-4
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative samples score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component $x \in \{[p], [w_1], [w_2]\}$ surrounded by the sequences of words $v_{1:i-1}$ and $v_{i+1:n}$ , we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: $\mathrm{bLS}(v_{1:i-1}, x, v_{i+1:n})_i$ .", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words $v_{1:i-1}$ and the subsequent words $v_{i+1:n}$ .", "Prediction.", "We predict a distribution of the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot \mathrm{bLS}(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}(\vec{w}_2, p_{1:n}, [w_1])_{n+1})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}([w_2], p_{1:n}, \vec{w}_1)_1)$ (1), where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and d is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $p_i = \mathrm{argmax}(\hat{p})$, $w_{1_i} = \mathrm{argmax}(\hat{w}_1)$, $w_{2_i} = \mathrm{argmax}(\hat{w}_2)$ (2). The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each componentpair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specified verbal paraphrases.", "The list often contains multiple semanticallysimilar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexicallydivergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, 4 the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: p 1 , ...,p k = argmax kp , wherep is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision, 
as to not let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and ngrams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each nouncompound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each nouncompound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Method isomorphic non-isomorphic Baselines SFS (Versley, 2013) 23.1 17.9 IIITH (Surtani et al., 2013) 23.1 25.8 MELODI (Van de Cruys et al., 2013) 13.0 54.8 SemEval 2013 Baseline (Hendrickx et al., 2013) 13 Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the nonisomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all nouncompounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We 
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 to the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases: $\hat{p}_1, ..., \hat{p}_k = \mathrm{argmax}_k\,\hat{p}$, where $\hat{p}$ is the distribution on the paraphrase vocabulary $V_p$, as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in $\hat{p}$: $\mathrm{par}(w_1 w_2) = \frac{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i} \cdot V_{p_{\hat{p}_i}}}{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i}}$ (3). We train a linear classifier, and represent w 1 w 2 in a feature vector f(w 1 w 2 ) in two variants: paraphrase: f(w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings f(w 1 w 2 ) = [par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings f(w 1 w 2 ) = [w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits to train, test, and validation: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with (Socher et al., 2012) models.", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F 1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F 1 points), OBJECTIVE (+5.5), AT-TRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional nouncompounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds to recognize non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 nouncompounds along with human judgments about their compositionality in a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest to apply it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility to use the biL-STM for generating completely new paraphrase templates unseen during training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-4
Noun Compound Interpretation Tasks
Compositionality Prediction: is spelling bee related to bee? Relation Classification: apple cake (ingredient), birthday cake (time). Paraphrasing: cake made of apples; cake eaten on a birthday.
Compositionality Prediction: is spelling bee related to bee? Relation Classification: apple cake (ingredient), birthday cake (time). Paraphrasing: cake made of apples; cake eaten on a birthday.
[]
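For the classification task listed on this slide, the paper_content above represents a noun-compound by the confidence-weighted average of its top-k predicted paraphrase vectors (Equation 3), concatenated with the constituent embeddings in the 'integrated' variant. A minimal sketch of that feature construction, assuming the paraphrase distribution from Eq. 1 and the encoded paraphrase matrix are already computed; the tensor names and toy sizes are illustrative assumptions, not the released code:

import torch

k = 4
p_hat = torch.softmax(torch.randn(50), dim=0)   # Eq. 1 distribution over a toy |V_p| = 50 templates
V_p = torch.randn(50, 200)                      # one biLSTM-encoded vector per paraphrase (2d, with d = 100)

conf, top = p_hat.topk(k)                       # the k most likely paraphrases and their confidences
par = (conf.unsqueeze(1) * V_p[top]).sum(0) / conf.sum()   # Eq. 3: confidence-weighted average

w1, w2 = torch.randn(100), torch.randn(100)     # constituent GloVe embeddings (stand-ins here)
f_paraphrase = par                              # 'paraphrase' feature variant
f_integrated = torch.cat([par, w1, w2])         # 'integrated' variant, fed to a linear classifier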
GEM-SciDuet-train-128#paper-1349#slide-5
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improve performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the nouncompounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g.", "in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered as a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, (23 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most system focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word nouncompounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the Word-Net definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative samples score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component $x \in \{[p], [w_1], [w_2]\}$ surrounded by the sequences of words $v_{1:i-1}$ and $v_{i+1:n}$ , we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: $\mathrm{bLS}(v_{1:i-1}, x, v_{i+1:n})_i$ .", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words $v_{1:i-1}$ and the subsequent words $v_{i+1:n}$ .", "Prediction.", "We predict a distribution of the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot \mathrm{bLS}(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}(\vec{w}_2, p_{1:n}, [w_1])_{n+1})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}([w_2], p_{1:n}, \vec{w}_1)_1)$ (1), where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and d is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $p_i = \mathrm{argmax}(\hat{p})$, $w_{1_i} = \mathrm{argmax}(\hat{w}_1)$, $w_{2_i} = \mathrm{argmax}(\hat{w}_2)$ (2). The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each componentpair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specified verbal paraphrases.", "The list often contains multiple semanticallysimilar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexicallydivergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, 4 the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: p 1 , ...,p k = argmax kp , wherep is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision, 
as to not let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and ngrams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each nouncompound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each nouncompound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Method isomorphic non-isomorphic Baselines SFS (Versley, 2013) 23.1 17.9 IIITH (Surtani et al., 2013) 23.1 25.8 MELODI (Van de Cruys et al., 2013) 13.0 54.8 SemEval 2013 Baseline (Hendrickx et al., 2013) 13 Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the nonisomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all nouncompounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We 
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 to the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases: $\hat{p}_1, ..., \hat{p}_k = \mathrm{argmax}_k\,\hat{p}$, where $\hat{p}$ is the distribution on the paraphrase vocabulary $V_p$, as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in $\hat{p}$: $\mathrm{par}(w_1 w_2) = \frac{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i} \cdot V_{p_{\hat{p}_i}}}{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i}}$ (3). We train a linear classifier, and represent w 1 w 2 in a feature vector f(w 1 w 2 ) in two variants: paraphrase: f(w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings f(w 1 w 2 ) = [par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings f(w 1 w 2 ) = [w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits into train, test, and validation sets: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with the models of Socher et al. (2012) .", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w_1 and w_2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F1 points), OBJECTIVE (+5.5), ATTRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional noun-compounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones, would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w_2] is unrelated to [w_1]'.", "Here, we assess whether our model succeeds in recognizing non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 noun-compounds along with human judgments about their compositionality on a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher) (a small code sketch of this top-k check follows this record, below).", "We conclude that the model does not address compositionality and suggest applying it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility of using the biLSTM for generating completely new paraphrase templates unseen during training." ] }
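The classification model in this record's content pools the top-k predicted paraphrase encodings into par(w_1 w_2) (Equation 3) and optionally concatenates the constituent embeddings. Below is a minimal NumPy sketch of that pooling step; the distribution `p_hat` and encoding matrix `V_p` are random stand-ins for the trained model's outputs, not the paper's actual values.

```python
import numpy as np

def paraphrase_vector(p_hat: np.ndarray, V_p: np.ndarray, k: int = 10) -> np.ndarray:
    """Equation 3: confidence-weighted average of the top-k paraphrase encodings.

    p_hat: distribution over the paraphrase vocabulary, shape (|V_p|,)
    V_p:   one encoded vector per paraphrase template, shape (|V_p|, d)
    """
    top_k = np.argsort(-p_hat)[:k]        # indices of the k most likely paraphrases
    weights = p_hat[top_k]                # their confidence scores
    return weights @ V_p[top_k] / weights.sum()

def integrated_features(par_vec, w1_vec, w2_vec):
    """The 'integrated' variant: f(w1 w2) = [par(w1 w2), w1, w2]."""
    return np.concatenate([par_vec, w1_vec, w2_vec])

# Toy usage with random stand-ins for the trained model's outputs.
rng = np.random.default_rng(0)
p_hat = rng.dirichlet(np.ones(1000))      # fake paraphrase distribution
V_p = rng.normal(size=(1000, 200))        # fake biLSTM paraphrase encodings
w1, w2 = rng.normal(size=100), rng.normal(size=100)
f = integrated_features(paraphrase_vector(p_hat, V_p), w1, w2)
print(f.shape)                            # (400,)
```

The resulting feature vector would then be fed to the linear classifier (logistic regression or SVM) described in the text.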
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-5
Noun Compound Paraphrasing
Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018
Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018
[]
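The compositionality analysis in the record above checks whether the '[w_2] is unrelated to [w_1]' template surfaces among a compound's 15 best predicted paraphrases. A toy sketch of that check over hypothetical predictions; note the paper concludes this signal alone is not a reliable compositionality test.

```python
UNRELATED = "[w2] is unrelated to [w1]"

def looks_non_compositional(top_paraphrases, max_rank=15):
    """Flag a noun-compound if the 'unrelated' template appears among its
    top-ranked paraphrases (mirroring the paper's analysis over 15 predictions)."""
    return UNRELATED in top_paraphrases[:max_rank]

# Hypothetical top predictions for two compounds.
print(looks_non_compositional(["[w2] made of [w1]", "[w2] of [w1]"]))  # False
print(looks_non_compositional([UNRELATED, "[w2] of [w1]"]))           # True
```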
GEM-SciDuet-train-128#paper-1349#slide-6
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications. It has been addressed in the literature either as a classification task over a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps to improve performance on both the noun-compound paraphrasing and classification tasks.
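The abstract's continuous paraphrase space can be inspected the way the paper later does, with a 2-D t-SNE projection of the template embeddings. A scikit-learn sketch over random stand-in vectors; real template encodings would come from the trained biLSTM, and the template list here is illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
templates = ["[w2] made of [w1]", "[w2] from [w1]", "[w2] of [w1]",
             "[w2] in [w1]", "[w2] for [w1]", "[w2] used in [w1]"]
emb = rng.normal(size=(len(templates), 200))  # stand-in for learned template vectors

# Project to 2-D; perplexity must stay below the number of samples.
xy = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(emb)
for t, (x, y) in zip(templates, xy):
    print(f"{t:>20s}  ({x:6.1f}, {y:6.1f})")
```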
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the noun-compounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w_2] extracted from [w_1]' template (e.g.", "in the context of olive oil) generalizes to '[w_2] made from [w_1]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent into an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most systems focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word noun-compounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the WordNet definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution over the paraphrases of a given length (a short code sketch of this per-length normalization appears after this record's content, below).", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w_1 and w_2 that do not co-occur, and adding an example (w_2, [w_2] is unrelated to [w_1], w_1, s_n), for some predefined negative-sample score s_n.", "Similarly, for a word w_i that did not occur in a paraphrase p we add (w_i, p, UNK, s_n) or (UNK, p, w_i, s_n), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w_1 and w_2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w_2, p, w_1, s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w_1], [w_2], and [p], which are used to represent a missing component, as in \"cake made of [w_1]\", \"[w_2] made of apple\", and \"cake [p] apple\".", "For a missing component $x \in \{[p], [w_1], [w_2]\}$ surrounded by the sequences of words $v_{1:i-1}$ and $v_{i+1:n}$, we encode the sequence using a bidirectional long short-term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: $bLS(v_{1:i-1}, x, v_{i+1:n})_i$.", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words $v_{1:i-1}$ and the subsequent words $v_{i+1:n}$.", "Prediction.", "We predict a distribution over the vocabulary of the missing component, i.e.", "to predict w_1 correctly we need to predict its index in the word vocabulary V_w, while the prediction of p is from the vocabulary of paraphrases in the training set, V_p.", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot bLS(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot bLS(\vec{w}_2, \vec{p}_{1:n}, [w_1])_{n+1})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot bLS([w_2], \vec{p}_{1:n}, \vec{w}_1)_1)$ (1), where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and d is the embeddings dimension (a PyTorch-style sketch of this prediction step follows this record's content, below).", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $\hat{p}_i = \mathrm{argmax}(\hat{p})$, $\hat{w}_{1_i} = \mathrm{argmax}(\hat{w}_1)$, $\hat{w}_{2_i} = \mathrm{argmax}(\hat{w}_2)$ (2) The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W_w, the MLP that predicts the index of a word.", "Table 1: Examples of top ranked predicted components using the model: predicting the paraphrase given w_1 and w_2 (left), w_1 given w_2 and the paraphrase (middle), and w_2 given w_1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to 
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each component pair.", "We note that most of the (w_2, paraphrase, w_1) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specific verbal paraphrases.", "The list often contains multiple semantically-similar paraphrases, such as '[w_2] involved in [w_1]' and '[w_2] in [w_1] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexically-divergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification into a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound into multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w_1 w_2, we first predict the k = 250 most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \mathrm{argmax}_k\,\hat{p}$, where $\hat{p}$ is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) (a minimal pairwise-ranking sketch follows this record, below).", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision, 
so as not to let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank by its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and n-grams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each noun-compound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Table 2 (isomorphic / non-isomorphic scores; baselines): SFS (Versley, 2013) 23.1 / 17.9; IIITH (Surtani et al., 2013) 23.1 / 25.8; MELODI (Van de Cruys et al., 2013) 13.0 / 54.8; SemEval 2013 Baseline (Hendrickx et al., 2013) 13.", "Table 3: Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all noun-compounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We 
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w_1 w_2 into the relation that holds between w_1 and w_2.", "Potentially, the corpus co-occurrences of w_1 and w_2 may contribute to the classification, e.g.", "'[w_2] held at [w_1]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w_1 w_2) for a given noun-compound w_1 w_2 as follows.", "We predict the indices of the k most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \mathrm{argmax}_k\,\hat{p}$, where $\hat{p}$ is the distribution on the paraphrase vocabulary V_p, as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in $\hat{p}$: $par(w_1 w_2) = \frac{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i} \cdot V_p[\hat{p}_i]}{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i}}$ (3) We train a linear classifier, and represent w_1 w_2 in a feature vector f(w_1 w_2) in two variants: paraphrase: f(w_1 w_2) = par(w_1 w_2), or integrated: concatenated to the constituent word embeddings f(w_1 w_2) = [par(w_1 w_2), w_1, w_2].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings f(w_1 w_2) = [w_1, w_2] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations 
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits into train, test, and validation sets: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with the models of Socher et al. (2012) .", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w_1 and w_2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F1 points), OBJECTIVE (+5.5), ATTRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional noun-compounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones, would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds to recognize non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 nouncompounds along with human judgments about their compositionality in a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest to apply it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility to use the biL-STM for generating completely new paraphrase templates unseen during training." ] }
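Equations 1–2 in this record's content define three softmax heads over a shared biLSTM that encodes the sequence "w_2 p w_1" with one slot masked. The following is a minimal sketch of that prediction step; the paper's implementation used DyNet, so this PyTorch version, along with all dimensions, vocabulary sizes, and token ids, is an illustrative stand-in.

```python
import torch
import torch.nn as nn

class ParaphraseModel(nn.Module):
    """Shared biLSTM with a word head (W_w, for w1/w2) and a paraphrase head (W_p)."""
    def __init__(self, word_vocab=5000, para_vocab=1000, dim=100):
        super().__init__()
        self.emb = nn.Embedding(word_vocab, dim)   # includes [w1], [w2], [p] slot ids
        self.bilstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
        self.W_w = nn.Linear(2 * dim, word_vocab)  # shared by subtasks (2) and (3)
        self.W_p = nn.Linear(2 * dim, para_vocab)

    def forward(self, token_ids, slot_pos, predict_paraphrase):
        out, _ = self.bilstm(self.emb(token_ids))  # (batch, seq, 2*dim)
        h = out[:, slot_pos]                        # biLSTM output at the masked slot
        head = self.W_p if predict_paraphrase else self.W_w
        return torch.log_softmax(head(h), dim=-1)

# Toy forward pass: 'cake [p] apple' -> distribution over paraphrase templates.
model = ParaphraseModel()
ids = torch.tensor([[11, 0, 42]])                  # 0 = hypothetical [p] slot id
log_probs = model(ids, slot_pos=1, predict_paraphrase=True)
print(log_probs.shape)                              # torch.Size([1, 1000])
```

Training would sum the cross-entropy losses of the three subtasks and weight them by the instance score, as the text describes.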
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-6
Motivation
Given a noun-compound w1w2, express the relation between the head w2 and the modifier w1 with multiple prepositional and verbal paraphrases [Nakov and Hearst, 2006] olive oil [w2] extracted from [w1] ground attack [w2] from [w1] boat whistle [w2] located in [w1] baby oil Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018
Given a noun-compound w1w2, express the relation between the head w2 and the modifier w1 with multiple prepositional and verbal paraphrases [Nakov and Hearst, 2006] olive oil [w2] extracted from [w1] ground attack [w2] from [w1] boat whistle [w2] located in [w1] baby oil Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018
[]
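The re-ranking described in this record follows Herbrich (2000): an SVM decides, for a pair of paraphrases of the same compound, which should rank higher. A minimal scikit-learn sketch with random stand-in features; the paper's actual feature set is not reproduced here.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
# Hypothetical per-paraphrase feature vectors and gold ranks (lower = better).
feats = rng.normal(size=(6, 8))
gold_rank = np.array([1, 2, 3, 1, 2, 3])

# Pairwise transform: one example per ordered pair with distinct gold ranks;
# the label says whether the first paraphrase should be ranked above the second.
X, y = [], []
for i in range(len(feats)):
    for j in range(len(feats)):
        if gold_rank[i] != gold_rank[j]:
            X.append(feats[i] - feats[j])
            y.append(1 if gold_rank[i] < gold_rank[j] else 0)

clf = LinearSVC().fit(np.array(X), np.array(y))

def prefer(a, b):
    """True if paraphrase a should be ranked above paraphrase b."""
    return clf.predict((a - b).reshape(1, -1))[0] == 1

print(prefer(feats[0], feats[2]))
```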
GEM-SciDuet-train-128#paper-1349#slide-7
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a nouncompound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improving performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the noun-compounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w_2] extracted from [w_1]' template (e.g.", "in the context of olive oil) generalizes to '[w_2] made from [w_1]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent into an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most systems focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word noun-compounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the WordNet definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution over the paraphrases of a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w_1 and w_2 that do not co-occur, and adding an example (w_2, [w_2] is unrelated to [w_1], w_1, s_n), for some predefined negative-sample score s_n.", "Similarly, for a word w_i that did not occur in a paraphrase p we add (w_i, p, UNK, s_n) or (UNK, p, w_i, s_n), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w_1 and w_2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w_2, p, w_1, s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w_1], [w_2], and [p], which are used to represent a missing component, as in \"cake made of [w_1]\", \"[w_2] made of apple\", and \"cake [p] apple\".", "For a missing component $x \in \{[p], [w_1], [w_2]\}$ surrounded by the sequences of words $v_{1:i-1}$ and $v_{i+1:n}$, we encode the sequence using a bidirectional long short-term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: $bLS(v_{1:i-1}, x, v_{i+1:n})_i$.", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words $v_{1:i-1}$ and the subsequent words $v_{i+1:n}$.", "Prediction.", "We predict a distribution over the vocabulary of the missing component, i.e.", "to predict w_1 correctly we need to predict its index in the word vocabulary V_w, while the prediction of p is from the vocabulary of paraphrases in the training set, V_p.", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot bLS(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot bLS(\vec{w}_2, \vec{p}_{1:n}, [w_1])_{n+1})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot bLS([w_2], \vec{p}_{1:n}, \vec{w}_1)_1)$ (1), where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and d is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $\hat{p}_i = \mathrm{argmax}(\hat{p})$, $\hat{w}_{1_i} = \mathrm{argmax}(\hat{w}_1)$, $\hat{w}_{2_i} = \mathrm{argmax}(\hat{w}_2)$ (2) The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W_w, the MLP that predicts the index of a word.", "Table 1: Examples of top ranked predicted components using the model: predicting the paraphrase given w_1 and w_2 (left), w_1 given w_2 and the paraphrase (middle), and w_2 given w_1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to 
"Qualitative Analysis", "To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each component pair.", "We note that most of the (w_2, paraphrase, w_1) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\", but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specific verbal paraphrases.", "The list often contains multiple semantically-similar paraphrases, such as '[w_2] involved in [w_1]' and '[w_2] in [w_1] industry'.", "This is a result of the model training objective (Section 3), which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2.", "The projection positions semantically-similar but lexically-divergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.",
"Evaluation: Noun-Compound Interpretation Tasks", "For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases (§5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations (§5.2), although it wasn't designed for this task.",
"Paraphrasing", "Task Definition.", "The general goal of this task is to interpret each noun-compound into multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.",
"Model.", "For a given noun-compound w_1 w_2, we first predict the $k = 250$ most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011).", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract a set of features for each paraphrase p, the last of which is its confidence score under the model.", "The last feature incorporates the original model score into the decision, so as not to let other considerations, such as preposition frequency in the training set, take over.",
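A minimal sketch of the pairwise re-ranker, assuming invented feature values and gold ranks (the full feature set is only partially recoverable from the source); it fits scikit-learn's LinearSVC on Herbrich-style pairwise feature differences and then ranks candidates along the learned direction.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

# Toy feature vectors for candidate paraphrases of one noun-compound,
# and gold ranks from the (hypothetical) annotated training data.
feats = {"made of": [0.9, 2.0], "of": [0.8, 1.0], "containing": [0.3, 1.0]}
gold_rank = {"made of": 1, "of": 2, "containing": 3}   # 1 = best

# Pairwise transform (Herbrich-style): classify the sign of feature differences.
X, y = [], []
for a, b in combinations(feats, 2):
    diff = np.array(feats[a]) - np.array(feats[b])
    label = 1 if gold_rank[a] < gold_rank[b] else -1   # +1: a should outrank b
    X.extend([diff, -diff])
    y.extend([label, -label])

clf = LinearSVC().fit(np.array(X), y)

def score(p):
    # Project a paraphrase's features onto the learned ranking direction.
    return float(clf.decision_function([feats[p]])[0])

print(sorted(feats, key=score, reverse=True))   # re-ranked paraphrase list
```

Because the model is linear, a single learned weight vector trained on pairwise differences also induces a total order over individual candidates, which is what the sort exploits.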
"During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with a final score < 0.025.", "The values for k and the threshold were tuned on the training set.",
"Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and n-grams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g. in derivations) yields a partial score.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.",
"Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each noun-compound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010), which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities; therefore, it is expected to score poorly on the recall-aware isomorphic setting.",
"Table 2: Performance in the two evaluation settings.
Method | isomorphic | non-isomorphic
SFS (Versley, 2013) | 23.1 | 17.9
IIITH (Surtani et al., 2013) | 23.1 | 25.8
MELODI (Van de Cruys et al., 2013) | 13.0 | 54.8
SemEval 2013 Baseline (Hendrickx et al., 2013) | 13 | ",
"Table 3: Categories of false positive and false negative predictions along with their percentage.",
"Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH), but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases for only a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all noun-compounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\", and others.",
"Error Analysis.",
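The official SemEval scorer is not reproduced here; the sketch below only illustrates the isomorphic/non-isomorphic distinction with a simplified word-overlap credit (the real scorer also matches n-grams and word prefixes and takes rank into account), so treat it as a conceptual approximation.

```python
def word_overlap(pred, gold):
    """Fraction of gold-paraphrase words covered by the prediction."""
    p, g = set(pred.split()), set(gold.split())
    return len(p & g) / len(g)

def non_isomorphic(preds, golds):
    """Precision-only: each prediction is credited with its best gold match."""
    return sum(max(word_overlap(p, g) for g in golds) for p in preds) / len(preds)

def isomorphic(preds, golds):
    """Precision and recall: greedy one-to-one alignment of predictions to golds,
    so missing gold paraphrases and spurious predictions both cost credit."""
    remaining, credit = list(golds), 0.0
    for p in preds:
        if not remaining:
            break
        best = max(remaining, key=lambda g: word_overlap(p, g))
        credit += word_overlap(p, best)
        remaining.remove(best)
    return credit / max(len(preds), len(golds))

golds = ["cake made of apples", "cake containing apples"]
preds = ["cake made of apples", "cake of apples", "cake with apples"]
print(isomorphic(preds, golds), non_isomorphic(preds, golds))
```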
"We analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence; e.g. a sentence such as \"rinse away the oil from baby's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases contained determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").",
"Classification", "Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w_1 w_2 into the relation that holds between w_1 and w_2.", "Potentially, the corpus co-occurrences of w_1 and w_2 may contribute to the classification; e.g. '[w_2] held at [w_1]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.",
"Model.", "We generate a paraphrase vector representation par(w_1 w_2) for a given noun-compound w_1 w_2 as follows.", "We predict the indices of the $k$ most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution over the paraphrase vocabulary $V_p$, as defined in Equation 1.", "We then encode each paraphrase using the bi-LSTM, and average the paraphrase vectors, weighted by their confidence scores in $\hat{p}$: $\mathit{par}(w_1 w_2) = \sum_{i=1}^{k} \hat{p}_{\hat{p}_i} \cdot \vec{V}_{\hat{p}_i} \,/\, \sum_{i=1}^{k} \hat{p}_{\hat{p}_i}$ (3).", "We train a linear classifier, and represent w_1 w_2 in a feature vector f(w_1 w_2) in two variants: paraphrase, $f(w_1 w_2) = \mathit{par}(w_1 w_2)$, or integrated, concatenated to the constituent word embeddings, $f(w_1 w_2) = [\mathit{par}(w_1 w_2), \vec{w}_1, \vec{w}_2]$.", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings, $f(w_1 w_2) = [\vec{w}_1, \vec{w}_2]$ (distributional).",
"Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled with 37 fine-grained relations (Tratz-fine) or 12 coarse-grained relations (Tratz-coarse).",
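A toy numpy/scikit-learn sketch of Equation (3) and the 'integrated' feature variant follows; the paraphrase encodings, distribution, and relation labels are fabricated placeholders, and only the weighted-average and concatenation steps mirror the description above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
D = 8                                        # toy embedding dimension

def paraphrase_vector(p_hat, para_vecs, k=2):
    """Equation (3): confidence-weighted average of the top-k paraphrase
    encodings, where p_hat is the paraphrase distribution from Equation (1)."""
    top = np.argsort(p_hat)[::-1][:k]
    w = p_hat[top]
    return (w[:, None] * para_vecs[top]).sum(axis=0) / w.sum()

# Toy inputs: a paraphrase distribution, paraphrase encodings, word embeddings.
p_hat = np.array([0.6, 0.3, 0.1])
para_vecs = rng.normal(size=(3, D))
w1_vec, w2_vec = rng.normal(size=D), rng.normal(size=D)

# 'integrated' variant: [par(w1 w2), w1, w2]; fake labels just to show the API.
x = np.concatenate([paraphrase_vector(p_hat, para_vecs), w1_vec, w2_vec])
X = np.stack([x, -x])                        # two fake training points
clf = LogisticRegression().fit(X, ["PURPOSE", "CONTAINED"])
print(clf.predict([x]))
```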
"We report the performance on two different splits of the dataset into train, test, and validation sets: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015), a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g. inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018).",
"Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010): we re-implement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016): a neural architecture that operates on the distributional representations of the noun-compound and its constituents, where noun-compound representations are learned with the models of Socher et al. (2012); we report the results from Shwartz and Waterson (2018).", "3) Paraphrase-based (Shwartz and Waterson, 2018): a neural classification model that learns an LSTM-based representation of the joint occurrences of w_1 and w_2 in a corpus (i.e. observed paraphrases), and integrates distributional information using the constituent embeddings.",
"Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model; however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method of Shwartz and Waterson (2018), in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.",
"Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-coarse lexical split.", "Examination of the per-relation F1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F1 points), OBJECTIVE (+5.5), ATTRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top-ranked paraphrases which are indicative of the gold label relation.",
"Compositionality Analysis", "Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional noun-compounds, which are included in the classification dataset (§5.2).", "We assumed that these compounds, more often than compositional ones, would consist of unrelated constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w_2] is unrelated to [w_1]'.",
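A minimal sketch of drawing such "unrelated" negatives, with a toy vocabulary and co-occurrence set; the score s_n is a placeholder value, as the text leaves it unspecified.

```python
import random

random.seed(0)
VOCAB = ["cake", "apple", "spoon", "carburetor", "sonnet", "plankton"]
COOCCUR = {("apple", "cake"), ("silver", "spoon")}   # toy co-occurrence set

def negative_samples(n, s_n=0.001):
    """Draw word pairs that never co-occur and emit 'unrelated' instances
    of the form (w2, '[w2] is unrelated to [w1]', w1, s_n)."""
    out = []
    while len(out) < n:
        w1, w2 = random.sample(VOCAB, 2)
        if (w1, w2) not in COOCCUR and (w2, w1) not in COOCCUR:
            out.append((w2, "[w2] is unrelated to [w1]", w1, s_n))
    return out

print(negative_samples(3))
```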
"Here, we assess whether our model succeeds in recognizing non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al. (2011), which consists of 90 noun-compounds along with human judgments about their compositionality on a scale of 0–5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality, and suggest applying it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al. (2011).",
"Conclusion", "We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility of using the bi-LSTM to generate completely new paraphrase templates unseen during training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-7
Evaluation Setting
A ranking rather than a retrieval task. Systems get a list of noun compounds, extract paraphrases from free text, and are evaluated for correlation with human judgments. Gold paraphrase score: how many annotators suggested it?
A ranking rather than a retrieval task. Systems get a list of noun compounds, extract paraphrases from free text, and are evaluated for correlation with human judgments. Gold paraphrase score: how many annotators suggested it?
[]
GEM-SciDuet-train-128#paper-1349#slide-8
1349
GEM-SciDuet-train-128#paper-1349#slide-8
Prior Methods 1 2
Based on constituent co-occurrences: cake made of apple. (1) Many unseen compounds with no paraphrases in the corpus, either rare (parsley cake) or highly lexicalized (ice cream). (2) Many compounds with just a few paraphrases: can we infer cake containing apple given cake made of apple? Prior work provides partial solutions to either (1) or (2).
Based on constituent co-occurrences: cake made of apple. (1) Many unseen compounds with no paraphrases in the corpus, either rare (parsley cake) or highly lexicalized (ice cream). (2) Many compounds with just a few paraphrases: can we infer cake containing apple given cake made of apple? Prior work provides partial solutions to either (1) or (2).
[]
GEM-SciDuet-train-128#paper-1349#slide-9
1349
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative samples score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ]} surrounded by the sequences of words v 1:i−1 and v i+1:n , we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: $\mathrm{bLS}(v_{1:i-1}, x, v_{i+1:n})_i$.", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v 1:i−1 and the subsequent words v i+1:n .", "Prediction.", "We predict a distribution of the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot \mathrm{bLS}(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}(\vec{w}_2, \vec{p}_{1:n}, [w_1])_{n+1})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}([w_2], \vec{p}_{1:n}, \vec{w}_1)_1)$ (1), where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and $d$ is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $p_i = \mathrm{argmax}(\hat{p})$, $w_{1_i} = \mathrm{argmax}(\hat{w}_1)$, $w_{2_i} = \mathrm{argmax}(\hat{w}_2)$ (2). The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each component pair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specific verbal paraphrases.", "The list often contains multiple semantically-similar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexically-divergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: $\hat{p}_1, \dots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision,
so as not to let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and n-grams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each noun-compound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Table 2 (isomorphic / non-isomorphic scores): SFS (Versley, 2013) 23.1 / 17.9; IIITH (Surtani et al., 2013) 23.1 / 25.8; MELODI (Van de Cruys et al., 2013) 13.0 / 54.8; SemEval 2013 Baseline (Hendrickx et al., 2013) 13; Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all noun-compounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 to the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases: $\hat{p}_1, \dots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution on the paraphrase vocabulary V p , as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in $\hat{p}$: $par(w_1 w_2) = \frac{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i} \cdot \vec{V_p}_{\hat{p}_i}}{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i}}$ (3). We train a linear classifier, and represent w 1 w 2 in a feature vector f (w 1 w 2 ) in two variants: paraphrase: f (w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings f (w 1 w 2 ) = [ par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings f (w 1 w 2 ) = [ w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits to train, test, and validation: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with (Socher et al., 2012) models.", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F 1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F 1 points), OBJECTIVE (+5.5), AT-TRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional nouncompounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds to recognize non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 nouncompounds along with human judgments about their compositionality in a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest to apply it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility to use the biL-STM for generating completely new paraphrase templates unseen during training." ] }
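The model of Equations 1 and 2 above (a shared biLSTM run over a sequence containing a learned placeholder for the missing component, with two softmax projections W_p over paraphrases and W_w over words) can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch rendering for illustration only: the paper's implementation is in DyNet, the toy vocabularies, dimensions, and identifiers here are assumptions, and indices are 0-based where the paper counts positions from 1.

import torch
import torch.nn as nn

class ParaphraseModel(nn.Module):
    def __init__(self, word_vocab, para_vocab, dim=100):
        super().__init__()
        self.word_vocab = word_vocab          # V_w, incl. special [w1]/[w2]/[p] tokens
        self.para_vocab = para_vocab          # V_p, the paraphrase-template vocabulary
        self.emb = nn.Embedding(len(word_vocab), dim)   # stands in for fixed GloVe
        self.bilstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
        self.W_w = nn.Linear(2 * dim, len(word_vocab))  # shared by subtasks (2) and (3)
        self.W_p = nn.Linear(2 * dim, len(para_vocab))

    def encode(self, tokens, slot):
        # Run the biLSTM over `tokens` and return the output at index `slot`,
        # i.e. bLS(v_{1:i-1}, x, v_{i+1:n})_i from the paper.
        ids = torch.tensor([[self.word_vocab[t] for t in tokens]])
        out, _ = self.bilstm(self.emb(ids))   # shape (1, n, 2*dim)
        return out[0, slot]

    def forward(self, w2, paraphrase, w1):
        # Subtask (1): predict the paraphrase from "w2 [p] w1".
        p_logits = self.W_p(self.encode([w2, "[p]", w1], 1))
        # Subtask (2): predict w1 from "w2 <paraphrase> [w1]".
        w1_logits = self.W_w(self.encode([w2] + paraphrase + ["[w1]"], len(paraphrase) + 1))
        # Subtask (3): predict w2 from "[w2] <paraphrase> w1".
        w2_logits = self.W_w(self.encode(["[w2]"] + paraphrase + [w1], 0))
        return p_logits, w1_logits, w2_logits

# Toy usage on "apple cake"; Equation 2 picks the argmax (here arbitrary, untrained).
words = {w: i for i, w in enumerate(["[w1]", "[w2]", "[p]", "cake", "made", "of", "apple"])}
paras = {p: i for i, p in enumerate(["[w2] made of [w1]", "[w2] of [w1]"])}
model = ParaphraseModel(words, paras)
p_logits, _, _ = model("cake", ["made", "of"], "apple")
print(list(paras)[p_logits.argmax().item()])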
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-9
Prior Methods
Represent NC by applying a function to its constituent distributional vectors: vec(apple cake) = f(vec(apple), vec(cake)) Predict paraphrase templates given NC vector Generalizes for similar unseen NCs, e.g. pear tart Learn is-a relations between paraphrases: Our solution: multi-task learning to address both problems
Represent NC by applying a function to its constituent distributional vectors: vec(apple cake) = f(vec(apple), vec(cake)) Predict paraphrase templates given NC vector Generalizes for similar unseen NCs, e.g. pear tart Learn is-a relations between paraphrases: Our solution: multi-task learning to address both problems
[]
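The training-data pipeline of Section 3.2 (match corpus n-grams against paraphrase templates for known noun-compounds, normalize frequencies per paraphrase length to counter the bias toward short paraphrases, and add about 1% "[w2] is unrelated to [w1]" negative samples) can be outlined as follows. This is a schematic sketch only: the tiny in-memory "corpus", the naive first/last-token matching, and all names are assumptions standing in for POS-template matching over the Google N-gram corpus.

import random
from collections import defaultdict

compounds = {("apple", "cake"), ("olive", "oil")}          # (w1, w2) pairs
ngrams = {("cake", "made", "of", "apple"): 120,            # n-gram -> web frequency
          ("cake", "of", "apple"): 900,
          ("oil", "made", "from", "olive"): 80}

def extract(ngrams, compounds):
    # Collect (w2, paraphrase, w1, raw_count) examples whose n-gram starts
    # with w2 and ends with w1 for some known compound.
    examples = []
    for ngram, count in ngrams.items():
        for w1, w2 in compounds:
            if ngram[0] == w2 and ngram[-1] == w1:
                examples.append((w2, " ".join(ngram[1:-1]), w1, count))
    return examples

def normalize_by_length(examples):
    # Turn raw counts into a distribution over paraphrases of the same length.
    totals = defaultdict(float)
    for _, p, _, c in examples:
        totals[len(p.split())] += c
    return [(w2, p, w1, c / totals[len(p.split())]) for w2, p, w1, c in examples]

def add_negatives(examples, vocab, rate=0.01, score=1.0):
    # Append "[w2] is unrelated to [w1]" examples for random word pairs,
    # assumed here not to co-occur.
    for _ in range(max(1, int(rate * len(examples)))):
        w1, w2 = random.sample(vocab, 2)
        examples.append((w2, "[w2] is unrelated to [w1]", w1, score))
    return examples

data = add_negatives(normalize_by_length(extract(ngrams, compounds)),
                     vocab=["spoon", "bee", "cow", "form"])
for ex in data:
    print(ex)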
GEM-SciDuet-train-128#paper-1349#slide-10
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a nouncompound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improving performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the nouncompounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g.", "in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered as a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, (23 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most system focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word nouncompounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the Word-Net definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative samples score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ]} surrounded by the sequences of words v 1:i−1 and v i+1:n , we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: $\mathrm{bLS}(v_{1:i-1}, x, v_{i+1:n})_i$.", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v 1:i−1 and the subsequent words v i+1:n .", "Prediction.", "We predict a distribution of the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot \mathrm{bLS}(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}(\vec{w}_2, \vec{p}_{1:n}, [w_1])_{n+1})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}([w_2], \vec{p}_{1:n}, \vec{w}_1)_1)$ (1), where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and $d$ is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $p_i = \mathrm{argmax}(\hat{p})$, $w_{1_i} = \mathrm{argmax}(\hat{w}_1)$, $w_{2_i} = \mathrm{argmax}(\hat{w}_2)$ (2). The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each component pair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specific verbal paraphrases.", "The list often contains multiple semantically-similar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexically-divergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: $\hat{p}_1, \dots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision,
so as not to let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and n-grams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each noun-compound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Table 2 (isomorphic / non-isomorphic scores): SFS (Versley, 2013) 23.1 / 17.9; IIITH (Surtani et al., 2013) 23.1 / 25.8; MELODI (Van de Cruys et al., 2013) 13.0 / 54.8; SemEval 2013 Baseline (Hendrickx et al., 2013) 13; Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all noun-compounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 to the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases: $\hat{p}_1, \dots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution on the paraphrase vocabulary V p , as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in $\hat{p}$: $par(w_1 w_2) = \frac{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i} \cdot \vec{V_p}_{\hat{p}_i}}{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i}}$ (3). We train a linear classifier, and represent w 1 w 2 in a feature vector f (w 1 w 2 ) in two variants: paraphrase: f (w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings f (w 1 w 2 ) = [ par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings f (w 1 w 2 ) = [ w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits to train, test, and validation: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with (Socher et al., 2012) models.", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F 1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F 1 points), OBJECTIVE (+5.5), AT-TRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional nouncompounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds in recognizing non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 noun-compounds along with human judgments about their compositionality on a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest applying it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility of using the biLSTM for generating completely new paraphrase templates unseen during training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-10
Model
Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018
Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018
[]
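Equation 3 in the record above builds the compound representation par(w1 w2) as a confidence-weighted average of the encodings of the k most likely paraphrases. The following is a minimal NumPy sketch, assuming the paraphrase encodings are precomputed into a matrix; the function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def paraphrase_representation(p_hat, paraphrase_matrix, k=10):
    """par(w1 w2), Equation 3: confidence-weighted average of the
    biLSTM encodings of the k most likely paraphrases.

    p_hat: (|V_p|,) distribution over the paraphrase vocabulary.
    paraphrase_matrix: (|V_p|, 2d) encoding of each paraphrase.
    """
    top = np.argsort(p_hat)[::-1][:k]   # indices of the k best paraphrases
    w = p_hat[top]                      # their confidence scores
    return (w[:, None] * paraphrase_matrix[top]).sum(axis=0) / w.sum()

# The 'integrated' classifier feature of the paper would then be
# np.concatenate([par, w1_vec, w2_vec]) for constituent embeddings
# w1_vec and w2_vec.
```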
GEM-SciDuet-train-128#paper-1349#slide-11
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improve performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the noun-compounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g.", "in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most systems focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word noun-compounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the WordNet definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution over paraphrases of a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative-sample score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ] } surrounded by the sequences of words v 1:i−1 and v i+1:n , we encode the sequence using a bidirectional long short-term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: bLS(v 1:i , x, v i+1:n ) i .", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v 1:i−1 and the subsequent words v i+1:n .", "Prediction.", "We predict a distribution over the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot \mathrm{bLS}(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}(\vec{w}_2, \vec{p}_{1:n}, [w_1])_{n+1})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}([w_2], \vec{p}_{1:n}, \vec{w}_1)_1)$ (1) where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and d is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $\hat{p}_i = \operatorname{argmax}(\hat{p})$, $\hat{w}_{1i} = \operatorname{argmax}(\hat{w}_1)$, $\hat{w}_{2i} = \operatorname{argmax}(\hat{w}_2)$ (2) The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to 
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each component pair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specific verbal paraphrases.", "The list often contains multiple semantically-similar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3), which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexically-divergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound with multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \operatorname{argmax}_k \hat{p}$, where $\hat{p}$ is the distribution over paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract a set of features for each paraphrase p, the last of which is its confidence score.", "The last feature incorporates the original model score into the decision, 
so as not to let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and n-grams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each noun-compound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities; therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Table 2 (method: isomorphic score, non-isomorphic score); Baselines: SFS (Versley, 2013) 23.1, 17.9; IIITH (Surtani et al., 2013) 23.1, 25.8; MELODI (Van de Cruys et al., 2013) 13.0, 54.8; SemEval 2013 Baseline (Hendrickx et al., 2013) 13", "Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all noun-compounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We 
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 into the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation $par(w_1 w_2)$ for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \operatorname{argmax}_k \hat{p}$, where $\hat{p}$ is the distribution over the paraphrase vocabulary $V_p$, as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in $\hat{p}$: $par(w_1 w_2) = \frac{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i} \cdot V_{p_{\hat{p}_i}}}{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i}}$ (3) We train a linear classifier, and represent w 1 w 2 in a feature vector f (w 1 w 2 ) in two variants: paraphrase: f (w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings f (w 1 w 2 ) = [ par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings f (w 1 w 2 ) = [ w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations 
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse).", "We report the performance on two different splits of the dataset into train, test, and validation sets: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with the models of Socher et al. (2012) .", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model; however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F1 points), OBJECTIVE (+5.5), ATTRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional noun-compounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones, would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds in recognizing non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 noun-compounds along with human judgments about their compositionality on a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest applying it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility of using the biLSTM for generating completely new paraphrase templates unseen during training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-11
Multi-task Reformulation
Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018 Predict a paraphrase p for a given NC w1w2: What is the relation between apple and cake? Predict w1 given a paraphrase p and w2: What can cake be made of? What can be made of apple?
Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018 Predict a paraphrase p for a given NC w1w2: What is the relation between apple and cake? Predict w1 given a paraphrase p and w2: What can cake be made of? What can be made of apple?
[]
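The slide above compresses the multi-task reformulation: a single (w2, p, w1) training example yields three prediction problems. A small sketch of that expansion follows, assuming the paper's placeholder tokens; the function itself is illustrative, not the authors' code.

```python
def multitask_instances(w2, p, w1):
    """Expand one (w2, p, w1) example into the three subtasks:
    (1) predict p from (w2, w1); (2) predict w1; (3) predict w2."""
    return [
        {"input": (w2, "[p]", w1), "target": p,  "task": "paraphrase"},
        {"input": (w2, p, "[w1]"), "target": w1, "task": "w1"},
        {"input": ("[w2]", p, w1), "target": w2, "task": "w2"},
    ]

# multitask_instances("cake", "made of", "apple") asks, in order:
# what relates cake and apple? what can cake be made of?
# what can be made of apple?
```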
GEM-SciDuet-train-128#paper-1349#slide-12
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improve performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the noun-compounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g.", "in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most systems focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word noun-compounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the WordNet definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution over paraphrases of a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative-sample score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ] } surrounded by the sequences of words v 1:i−1 and v i+1:n , we encode the sequence using a bidirectional long short-term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: bLS(v 1:i , x, v i+1:n ) i .", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v 1:i−1 and the subsequent words v i+1:n .", "Prediction.", "We predict a distribution over the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot \mathrm{bLS}(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}(\vec{w}_2, \vec{p}_{1:n}, [w_1])_{n+1})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}([w_2], \vec{p}_{1:n}, \vec{w}_1)_1)$ (1) where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and d is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $\hat{p}_i = \operatorname{argmax}(\hat{p})$, $\hat{w}_{1i} = \operatorname{argmax}(\hat{w}_1)$, $\hat{w}_{2i} = \operatorname{argmax}(\hat{w}_2)$ (2) The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to 
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each component pair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specific verbal paraphrases.", "The list often contains multiple semantically-similar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3), which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexically-divergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound with multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \operatorname{argmax}_k \hat{p}$, where $\hat{p}$ is the distribution over paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract a set of features for each paraphrase p, the last of which is its confidence score.", "The last feature incorporates the original model score into the decision, 
"Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases (§5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations (§5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound using multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w_1 w_2, we first predict the k = 250 most likely paraphrases: \hat{p}_1, ..., \hat{p}_k = argmax_k \hat{p}, where \hat{p} is the distribution over paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), these scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011).", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract a set of features for each paraphrase p, the last of which is its confidence score under our model.", "The last feature incorporates the original model score into the decision, so as not to let other considerations, such as preposition frequency in the training set, take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with a final score < 0.025.", "The values for k and the threshold were tuned on the training set.",
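A sketch may make the re-ranking step concrete. Following the pairwise-ranking recipe of Herbrich (2000), every pair of gold paraphrases of the same noun-compound yields one training example whose label encodes which member should be ranked higher; the rescoring direction below is one plausible reading of "multiplying its rank with its original model score", and all function and variable names are illustrative.

from itertools import combinations
import numpy as np
from sklearn.svm import SVC

def pairwise_examples(features, gold_rank):
    """features: {paraphrase: np.array}; gold_rank: {paraphrase: int}, 1 = best."""
    X, y = [], []
    for p1, p2 in combinations(features, 2):
        X.append(features[p1] - features[p2])   # difference vector of the pair
        y.append(1 if gold_rank[p1] < gold_rank[p2] else 0)
    return np.array(X), np.array(y)

def rescore(ranked, model_score, threshold=0.025):
    """ranked: paraphrases sorted best-first by a classifier such as
    SVC(kernel='linear').fit(X, y); model_score: {paraphrase: float}."""
    scored = [(p, (len(ranked) - i) * model_score[p]) for i, p in enumerate(ranked)]
    return [(p, s) for p, s in scored if s >= threshold]   # prune low scorers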
"Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and n-grams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g. in derivations) yields a partial score.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each noun-compound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010), which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities; it is therefore expected to score poorly on the recall-aware isomorphic setting.", "Table 2 (isomorphic / non-isomorphic scores): SFS (Versley, 2013) 23.1 / 17.9; IIITH (Surtani et al., 2013) 23.1 / 25.8; MELODI (Van de Cruys et al., 2013) 13.0 / 54.8; SemEval 2013 Baseline (Hendrickx et al., 2013) 13", "Table 3: Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH), but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is its ability to generalize, and that is also demonstrated in comparison to our baseline's performance.", "The baseline retrieved paraphrases for only a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all noun-compounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence, e.g. a sentence such as \"rinse away the oil from baby's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases contained determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflected forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w_1 w_2 to the relation that holds between w_1 and w_2.", "Potentially, the corpus co-occurrences of w_1 and w_2 may contribute to the classification, e.g. '[w_2] held at [w_1]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w_1 w_2) for a given noun-compound w_1 w_2 as follows.", "We predict the indices of the k most likely paraphrases: \hat{p}_1, ..., \hat{p}_k = argmax_k \hat{p}, where \hat{p} is the distribution over the paraphrase vocabulary V_p, as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in \hat{p}: par(w_1 w_2) = (Σ_{i=1}^{k} \hat{p}_{\hat{p}_i} · V_{p_{\hat{p}_i}}) / (Σ_{i=1}^{k} \hat{p}_{\hat{p}_i}) (Equation 3), where V_{p_{\hat{p}_i}} is the biLSTM encoding of paraphrase \hat{p}_i.", "We train a linear classifier, and represent w_1 w_2 by a feature vector f(w_1 w_2) in two variants: paraphrase: f(w_1 w_2) = par(w_1 w_2), or integrated: concatenated to the constituent word embeddings, f(w_1 w_2) = [par(w_1 w_2), w_1, w_2].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings, f(w_1 w_2) = [w_1, w_2] (distributional).",
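Equation 3 amounts to a confidence-weighted average, which a few lines of NumPy can express. This is an illustrative sketch, with encodings standing in for the biLSTM vectors V_{p_{\hat{p}_i}}:

import numpy as np

def paraphrase_vector(scores, encodings):
    """scores: (k,) confidences of the k best paraphrases under p_hat;
    encodings: (k, 2d) biLSTM vectors of those paraphrases."""
    w = np.asarray(scores, dtype=float)
    return (w[:, None] * np.asarray(encodings)).sum(axis=0) / w.sum()

# integrated variant: np.concatenate([paraphrase_vector(s, E), w1_vec, w2_vec])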
"Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled with 37 fine-grained relations (Tratz-fine) or 12 coarse-grained relations (Tratz-coarse).", "We report performance on two different splits of the dataset into train, test, and validation sets: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015), a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g. inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018).", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010): we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016): a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with the composition models of Socher et al. (2012).", "We report the results from Shwartz and Waterson (2018).", "3) Paraphrase-based (Shwartz and Waterson, 2018): a neural classification model that learns an LSTM-based representation of the joint occurrences of w_1 and w_2 in a corpus (i.e. observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model; however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method of Shwartz and Waterson (2018), in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-coarse lexical split.", "Examination of the per-relation F1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F1 points), OBJECTIVE (+5.5), ATTRIBUTE (+3.8) and LOCATION/PART-WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top-ranked paraphrases which are indicative of the gold-label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional noun-compounds, which are included in the classification dataset (§5.2).", "We assumed that these compounds, more often than compositional ones, would consist of unrelated constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w_2] is unrelated to [w_1]'.",
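The lexical split can be illustrated with a small helper. This is our own sketch of the idea (disjoint vocabularies per set), not the exact split construction of Shwartz and Waterson (2018):

def lexical_split(compounds, train_vocab, test_vocab):
    """compounds: iterable of (w1, w2, label); vocabularies are disjoint sets,
    so a compound such as 'pear tart' cannot be memorized from training words."""
    train = [c for c in compounds if c[0] in train_vocab and c[1] in train_vocab]
    test = [c for c in compounds if c[0] in test_vocab and c[1] in test_vocab]
    return train, test  # compounds mixing the two vocabularies are discarded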
"Here, we assess whether our model succeeds in recognizing non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al. (2011), which consists of 90 noun-compounds along with human judgments about their compositionality on a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality, and suggest applying it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al. (2011).", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility of using the biLSTM for generating completely new paraphrase templates unseen during training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-12
Main Task 1 Predicting Paraphrases
What is the relation between apple and cake? Encode placeholder [p] in cake [p] apple using biLSTM Predict an index in the paraphrase vocabulary Fixed word embeddings, learned placeholder embeddings (1) Generalizes NCs: pear tart expected to yield similar results
What is the relation between apple and cake? Encode placeholder [p] in cake [p] apple using biLSTM Predict an index in the paraphrase vocabulary Fixed word embeddings, learned placeholder embeddings (1) Generalizes NCs: pear tart expected to yield similar results
[]
GEM-SciDuet-train-128#paper-1349#slide-13
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications. It has been addressed in the literature either as a classification task into a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improve performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017), while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the noun-compounds (Van de Cruys et al., 2013).", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al. (2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al. (2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w_2] extracted from [w_1]' template (e.g. in the context of olive oil) generalizes to '[w_2] made from [w_1]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017).", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign-language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008), whose goal is to extract relational tuples from text.", "Most systems focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word noun-compounds and assumed an is-a relation between the parts, e.g. extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the WordNet definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the majority of English noun-compounds don't have them (Nakov, 2013).",
"Paraphrasing Model As opposed to previous approaches, which focus on predicting a paraphrase template for a given noun-compound, we reformulate the task as a multi-task learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of the training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w_2, p, w_1), and we train the model on 3 subtasks: (1) predict p given w_1 and w_2, (2) predict w_1 given p and w_2, and (3) predict w_2 given p and w_1.", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?\", \"what can be made of apple?\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g. 'is made of' and 'made of'), or from shared constituents, e.g. '[w_2] involved in [w_1]' and '[w_2] in [w_1] industry' can share [w_1] = insurance and [w_2] = company.", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w_2, p, w_1, s) examples, where w_1 and w_2 are constituents of a noun-compound w_1 w_2, p is a templated paraphrase, and s is the score assigned to the training instance.", "We use the 19,491 noun-compounds found in the SemEval task datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011).", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w_2] VERB PREP [w_1]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google N-gram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013).", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing the lemmas of w_1 and w_2 for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied by its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al. (2013), the shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus." ] }
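The extraction step above can be sketched in a few lines. This is a simplified illustration of scanning (n-gram, frequency) pairs for a '[w_2] ... [w_1]' match; the real pipeline also matches POS-tag patterns and removes adjectives and determiners, and lemma() below is a crude stand-in for proper lemmatization:

def lemma(token):
    return token.lower().rstrip("s")  # toy lemmatizer, for illustration only

def extract_triples(ngrams, compounds):
    """ngrams: iterable of (token list, frequency); compounds: set of (w1, w2)."""
    for tokens, freq in ngrams:
        lemmas = [lemma(t) for t in tokens]
        for w1, w2 in compounds:
            if lemma(w2) in lemmas and lemma(w1) in lemmas:
                i, j = lemmas.index(lemma(w2)), lemmas.index(lemma(w1))
                if i < j:  # the paraphrase sits between w2 and w1
                    yield (w2, " ".join(tokens[i + 1 : j]), w1, freq)

# e.g. (["cake", "made", "of", "sweet", "apples"], 87) with ("apple", "cake")
# yields ("cake", "made of sweet", "apple", 87); adjective removal happens later.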
GEM-SciDuet-train-128#paper-1349#slide-13
Helper Task 2 Predicting Missing Constituents
What can cake be made of? Encode placeholder in cake made of [w1] using biLSTM Predict an index in the word vocabulary [w2] containing [w1] expected to yield similar results
What can cake be made of? Encode placeholder in cake made of [w1] using biLSTM Predict an index in the word vocabulary [w2] containing [w1] expected to yield similar results
[]
GEM-SciDuet-train-128#paper-1349#slide-14
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications. It has been addressed in the literature either as a classification task into a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improve performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the nouncompounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g.", "in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered as a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, (23 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most system focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word nouncompounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the Word-Net definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches that focus on predicting a paraphrase template for a given noun-compound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that form valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learn better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a noun-compound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google N-gram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied by its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative-sample score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ]} surrounded by the sequences of words v 1:i−1 and v i+1:n , we encode the sequence using a bidirectional long short-term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: $\mathrm{bLS}(v_{1:i-1}, x, v_{i+1:n})_i$.", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v 1:i−1 and the subsequent words v i+1:n .", "Prediction.", "We predict a distribution over the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot \mathrm{bLS}(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}(\vec{w}_2, p_{1:n}, [w_1])_{n+1})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}([w_2], p_{1:n}, \vec{w}_1)_1)$ (1), where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and $d$ is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $p_i = \mathrm{argmax}(\hat{p})$, $w_{1i} = \mathrm{argmax}(\hat{w}_1)$, $w_{2i} = \mathrm{argmax}(\hat{w}_2)$ (2).", "The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to 
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each component pair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specific verbal paraphrases.", "The list often contains multiple semantically-similar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3), which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexically-divergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound with multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract several features for each paraphrase p, the last of which is its confidence score under our model.", "The last feature incorporates the original model score into the decision, 
so as not to let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and n-grams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each noun-compound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities; it is therefore expected to score poorly on the recall-aware isomorphic setting.", "Table 2 (isomorphic / non-isomorphic scores): SFS (Versley, 2013) 23.1 / 17.9; IIITH (Surtani et al., 2013) 23.1 / 25.8; MELODI (Van de Cruys et al., 2013) 13.0 / 54.8; SemEval 2013 Baseline (Hendrickx et al., 2013) 13… [remaining rows truncated in extraction].", "Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all noun-compounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We 
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases contained determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 into the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution on the paraphrase vocabulary V p , as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in $\hat{p}$: $par(w_1 w_2) = \frac{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i} \cdot V_{p_{\hat{p}_i}}}{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i}}$ (3).", "We train a linear classifier, and represent w 1 w 2 in a feature vector f (w 1 w 2 ) in two variants: paraphrase: $f(w_1 w_2) = par(w_1 w_2)$, or integrated: concatenated to the constituent word embeddings, $f(w_1 w_2) = [par(w_1 w_2), \vec{w}_1, \vec{w}_2]$.", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings, $f(w_1 w_2) = [\vec{w}_1, \vec{w}_2]$ (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations 
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different splits of the dataset into train, test, and validation sets: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with the models of Socher et al. (2012) .", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model; however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F1 points), OBJECTIVE (+5.5), ATTRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional noun-compounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones, would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds in recognizing non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) , which consists of 90 noun-compounds along with human judgments about their compositionality on a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest applying it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility of using the biLSTM for generating completely new paraphrase templates unseen during training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-14
Training Data
Collected from Google N-grams Templates of POS tags (e.g. [w2] verb prep [w1]) Weighting by frequency and length
Collected from Google N-grams Templates of POS tags (e.g. [w2] verb prep [w1]) Weighting by frequency and length
[]
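The "Training Data" slide above compresses Section 3.2 of the paper content: n-grams from the Google N-gram corpus that match constrained POS-tag templates are turned into (w2, paraphrase, w1) instances, and raw frequencies are normalized per paraphrase length so that short, generic paraphrases do not dominate. A minimal Python sketch under those assumptions follows; the function names and the `matches_template` predicate are illustrative, not taken from the authors' code.

```python
from collections import defaultdict

def extract_instances(ngrams, compounds, matches_template):
    """ngrams: iterable of (tokens, freq) pairs; compounds: set of (w1, w2)
    lemma pairs; matches_template: predicate for '[w2] VERB PREP [w1]'-style
    POS patterns."""
    counts = defaultdict(float)
    for tokens, freq in ngrams:
        for w1, w2 in compounds:
            if w1 in tokens and w2 in tokens and matches_template(tokens, w1, w2):
                # Replace the constituents with slot symbols to get a template.
                template = tuple('[w1]' if t == w1 else '[w2]' if t == w2 else t
                                 for t in tokens)
                counts[(w2, template, w1)] += freq
    return counts

def length_normalized(counts):
    """Turn raw frequencies into a distribution per paraphrase length, so that
    'cake made of apples' is not drowned out by the more frequent 'cake of apples'."""
    totals = defaultdict(float)
    for (_, template, _), freq in counts.items():
        totals[len(template)] += freq
    return {inst: freq / totals[len(inst[1])] for inst, freq in counts.items()}
```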
GEM-SciDuet-train-128#paper-1349#slide-15
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
GEM-SciDuet-train-128#paper-1349#slide-15
Evaluation
Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018
Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018
[]
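The classification evaluation summarized above rests on two ingredients: the confidence-weighted paraphrase vector of Equation 3 and the 'integrated' feature vector f(w1 w2) = [par(w1 w2), w1, w2] fed to a tuned linear classifier, whose per-relation F1 scores drive the analysis. A hedged scikit-learn sketch; the random feature matrices are stand-ins for rows built with integrated_features, and 12 is the Tratz-coarse relation count:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def par_vec(paraphrase_vecs, confidences):
    """Equation 3: confidence-weighted average of the top-k paraphrase encodings."""
    p = np.asarray(confidences, dtype=float)
    V = np.asarray(paraphrase_vecs, dtype=float)
    return (p[:, None] * V).sum(axis=0) / p.sum()

def integrated_features(par, w1_vec, w2_vec):
    """The 'integrated' variant: f(w1 w2) = [par(w1 w2), w1, w2]."""
    return np.concatenate([par, w1_vec, w2_vec])

# Stand-in feature matrices; in practice every row is integrated_features(...).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(120, 300)), rng.integers(0, 12, size=120)
X_test, y_test = rng.normal(size=(30, 300)), rng.integers(0, 12, size=30)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
per_relation_f1 = f1_score(y_test, clf.predict(X_test),
                           average=None, labels=list(range(12)), zero_division=0)
```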
GEM-SciDuet-train-128#paper-1349#slide-16
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a nouncompound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improving performance on both the noun-compound paraphrasing and classification tasks.
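The abstract above describes representing paraphrases in a continuous space; concretely, the paper encodes a sequence with one missing component (the paraphrase slot or a constituent slot) with a biLSTM and applies a softmax head for the missing item, with the word head shared across the two word-prediction subtasks. The original model is implemented in DyNet; the following is a rough PyTorch re-sketch with illustrative dimensions, not the authors' code:

```python
import torch
import torch.nn as nn

class ParaphraseModel(nn.Module):
    def __init__(self, word_vocab, para_vocab, dim=100):
        super().__init__()
        self.embed = nn.Embedding(word_vocab, dim)   # GloVe-initialised and frozen in the paper
        self.bilstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
        self.W_w = nn.Linear(2 * dim, word_vocab)    # shared head for the two word subtasks
        self.W_p = nn.Linear(2 * dim, para_vocab)    # head for the paraphrase subtask

    def forward(self, token_ids, slot_position, predict_paraphrase):
        out, _ = self.bilstm(self.embed(token_ids))           # (batch, seq, 2*dim)
        slot = out[torch.arange(out.size(0)), slot_position]  # biLSTM output at the placeholder
        head = self.W_p if predict_paraphrase else self.W_w
        return head(slot)                                     # logits; softmax lives in the loss

model = ParaphraseModel(word_vocab=5000, para_vocab=1000)
tokens = torch.randint(0, 5000, (4, 3))  # four toy "w2 [p] w1" sequences
logits = model(tokens, slot_position=torch.tensor([1, 1, 1, 1]), predict_paraphrase=True)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 1000, (4,)))
```

As in the paper, the three subtask losses are cross-entropies that are summed and weighted by the instance score, and embeddings plus the biLSTM are shared across subtasks.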
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the nouncompounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g.", "in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered as a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, (23 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most system focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word nouncompounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the Word-Net definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative samples score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ] } surrounded by the sequences of words v 1:i−1 and v i+1:n , we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: bLS(v 1:i , x, v i+1:n ) i .", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v 1:i−1 and the subsequent words v i+1:n .", "Prediction.", "We predict a distribution of the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: p = softmax(W p · bLS( w 2 , [p], w 1 ) 2 ) w 1 = softmax(W w · bLS( w 2 , p 1:n , [w 1 ]) n+1 ) w 2 = softmax(W w · bLS([w 2 ], p 1:n , w 1 ) 1 ) (1) where W w ∈ R |Vw|×2d , W p ∈ R |Vp|×2d , and d is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: 3 p i = argmax(p) w 1i = argmax(ŵ 1 ) w 2i = argmax(ŵ 2 ) (2) The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to 
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each componentpair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specified verbal paraphrases.", "The list often contains multiple semanticallysimilar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexicallydivergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, 4 the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: p 1 , ...,p k = argmax kp , wherep is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision, 
as to not let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and ngrams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each nouncompound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each nouncompound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Method isomorphic non-isomorphic Baselines SFS (Versley, 2013) 23.1 17.9 IIITH (Surtani et al., 2013) 23.1 25.8 MELODI (Van de Cruys et al., 2013) 13.0 54.8 SemEval 2013 Baseline (Hendrickx et al., 2013) 13 Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the nonisomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all nouncompounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We 
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 nouncompounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manu-ally annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stem from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 to the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases:p 1 , ...,p k = argmax kp , wherep is the distribution on the paraphrase vocabulary V p , as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores inp: par(w 1 w 2 ) = k i=1pp i · V pp i k i=1pp i (3) We train a linear classifier, and represent w 1 w 2 in a feature vector f (w 1 w 2 ) in two variants: paraphrase: f (w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings f (w 1 w 2 ) = [ par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a nouncompound by the concatenation of its constituent embeddings f (w 1 w 2 ) = [ w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations 
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits to train, test, and validation: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with (Socher et al., 2012) models.", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F 1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F 1 points), OBJECTIVE (+5.5), AT-TRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional nouncompounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds to recognize non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 nouncompounds along with human judgments about their compositionality in a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest to apply it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility to use the biL-STM for generating completely new paraphrase templates unseen during training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-16
Ranking Model
Predict top k paraphrases for each noun compound Learn to re-rank the paraphrases to better correlate with human judgments SVM pair-wise ranking with the following features: POS tags in the paraphrase Prepositions in the paraphrase Similarity to predicted paraphrase
Predict top k paraphrases for each noun compound Learn to re-rank the paraphrases to better correlate with human judgments SVM pair-wise ranking with the following features: POS tags in the paraphrase Prepositions in the paraphrase Similarity to predicted paraphrase
[]
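The pairwise re-ranker on this slide can be prototyped with scikit-learn, which the paper itself uses for the SVM. Following Herbrich's pairwise transform, each training example is the feature difference of two candidate paraphrases of the same noun-compound, labeled by which one the human annotators ranked higher; the toy vectors below ([is_prepositional, n_tokens, model_score]) stand in for the slide's POS, preposition, and similarity features:

```python
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_examples(candidates, gold_ranks):
    """Herbrich-style transform: one example per ordered pair of candidates."""
    X, y = [], []
    for i in range(len(candidates)):
        for j in range(len(candidates)):
            if gold_ranks[i] != gold_ranks[j]:
                X.append(np.asarray(candidates[i]) - np.asarray(candidates[j]))
                y.append(1 if gold_ranks[i] < gold_ranks[j] else -1)  # lower rank = better
    return np.array(X), np.array(y)

# Toy feature vectors: [is_prepositional, n_tokens, model_score].
feats = [[1, 2, 0.9], [0, 4, 0.5], [1, 3, 0.1]]
ranks = [1, 2, 3]
X, y = pairwise_examples(feats, ranks)
ranker = LinearSVC().fit(X, y)
# At inference, sort candidates by pairwise comparisons, rescale each paraphrase
# by its original model score, and prune paraphrases below the tuned threshold.
```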
GEM-SciDuet-train-128#paper-1349#slide-17
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a nouncompound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improving performance on both the noun-compound paraphrasing and classification tasks.
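The training-data section in the paper content below harvests Google n-grams matching templates such as '[w2] VERB PREP [w1]', normalizes counts within each paraphrase length to offset the bias toward short paraphrases, and mixes in about 1% '[w2] is unrelated to [w1]' negatives. A simplified sketch under those assumptions; the real pipeline matches POS patterns and strips adjectives and determiners, which the crude first/last-token check here does not:

```python
import random
from collections import Counter, defaultdict

def harvest(ngrams, compounds, min_count=5, neg_frac=0.01, seed=0):
    """ngrams: iterable of (tokens, count); compounds: set of (w1, w2) lemma pairs."""
    counts = Counter()
    for tokens, count in ngrams:
        for w1, w2 in compounds:
            # Crude stand-in for the POS-pattern match: '[w2] ... [w1]'.
            if len(tokens) > 2 and tokens[0] == w2 and tokens[-1] == w1:
                template = "[w2] " + " ".join(tokens[1:-1]) + " [w1]"
                counts[(w2, template, w1)] += count
    # Normalize frequencies within each paraphrase length, so short generic
    # paraphrases do not dominate longer, more specific ones.
    totals = defaultdict(int)
    for (_, tpl, _), c in counts.items():
        totals[len(tpl.split())] += c
    data = [(w2, tpl, w1, c / totals[len(tpl.split())])
            for (w2, tpl, w1), c in counts.items() if c >= min_count]
    # Roughly 1% negatives built from random unrelated nouns; the 0.1 score
    # is an assumed placeholder for the paper's predefined negative score s_n.
    rng = random.Random(seed)
    nouns = [w for pair in compounds for w in pair]
    for _ in range(int(neg_frac * len(data))):
        data.append((rng.choice(nouns), "[w2] is unrelated to [w1]",
                     rng.choice(nouns), 0.1))
    return data

sample = harvest([(("cake", "made", "of", "apple"), 50)], {("apple", "cake")})
```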
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the nouncompounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned "is-a" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w_2] extracted from [w_1]' template (e.g.", "in the context of olive oil) generalizes to '[w_2] made from [w_1]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered as a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most systems focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word nouncompounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from "NIH director Francis Collins".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the WordNet definition "industry that produces and delivers oil".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w_1 and w_2 that do not co-occur, and adding an example (w_2, [w_2] is unrelated to [w_1], w_1, s_n), for some predefined negative samples score s_n.", "Similarly, for a word w_i that did not occur in a paraphrase p we add (w_i, p, UNK, s_n) or (UNK, p, w_i, s_n), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w_1 and w_2 are unrelated, rather than forcibly predicting some relation between them.", "Model. For a training instance (w_2, p, w_1, s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w_1], [w_2], and [p], which are used to represent a missing component, as in "cake made of [w_1]", "[w_2] made of apple", and "cake [p] apple".", "For a missing component x ∈ {[p], [w_1], [w_2]} surrounded by the sequences of words v_{1:i-1} and v_{i+1:n}, we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: bLS(v_{1:i}, x, v_{i+1:n})_i.", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v_{1:i-1} and the subsequent words v_{i+1:n}.", "Prediction.", "We predict a distribution of the vocabulary of the missing component, i.e.", "to predict w_1 correctly we need to predict its index in the word vocabulary V_w, while the prediction of p is from the vocabulary of paraphrases in the training set, V_p.", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot \mathrm{bLS}(\vec{w_2}, [p], \vec{w_1})_2)$, $\hat{w_1} = \mathrm{softmax}(W_w \cdot \mathrm{bLS}(\vec{w_2}, \vec{p}_{1:n}, [w_1])_{n+1})$, $\hat{w_2} = \mathrm{softmax}(W_w \cdot \mathrm{bLS}([w_2], \vec{p}_{1:n}, \vec{w_1})_1)$ (1), where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and $d$ is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $\hat{p}_i = \mathrm{argmax}(\hat{p})$, $\hat{w_1}_i = \mathrm{argmax}(\hat{w_1})$, $\hat{w_2}_i = \mathrm{argmax}(\hat{w_2})$ (2).", "The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W_w, the MLP that predicts the index of a word.", "Table 1: Examples of top ranked predicted components using the model: predicting the paraphrase given w_1 and w_2 (left), w_1 given w_2 and the paraphrase (middle), and w_2 given w_1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each componentpair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specified verbal paraphrases.", "The list often contains multiple semanticallysimilar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexicallydivergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, 4 the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: p 1 , ...,p k = argmax kp , wherep is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision, 
so as not to let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and ngrams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each nouncompound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each nouncompound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Table 2 (partial), method | isomorphic | non-isomorphic: SFS (Versley, 2013) 23.1 | 17.9; IIITH (Surtani et al., 2013) 23.1 | 25.8; MELODI (Van de Cruys et al., 2013) 13.0 | 54.8; SemEval 2013 Baseline (Hendrickx et al., 2013) 13.", "Table 3: Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all nouncompounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases "system of welfare benefits", "system to provide welfare" and others.", "Error Analysis.", "We
"Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and n-grams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g. in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each noun-compound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010), which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities; it is therefore expected to score poorly in the recall-aware isomorphic setting.", "Table 2 (isomorphic / non-isomorphic scores of the baselines): SFS (Versley, 2013): 23.1 / 17.9; IIITH (Surtani et al., 2013): 23.1 / 25.8; MELODI (Van de Cruys et al., 2013): 13.0 / 54.8; SemEval 2013 Baseline (Hendrickx et al., 2013): 13.", "Table 3: Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the non-isomorphic setting, it outperforms the other two systems that score reasonably in the isomorphic setting (SFS and IIITH), but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is its ability to generalize, and that is also demonstrated in comparison to our baseline's performance.", "The baseline retrieved paraphrases for only a third of the noun-compounds (61/181), expectedly yielding poor performance in the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all noun-compounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\", and others.", "Error Analysis.", "We analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 noun-compounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manually annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stems from an n-gram that does not respect the syntactic structure of the sentence; e.g. a sentence such as \"rinse away the oil from baby's head\" produces the n-gram \"oil from baby\".", "With respect to false negatives, many consisted of long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases contained determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents of the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w1 w2 to the relation that holds between w1 and w2.", "Potentially, the corpus co-occurrences of w1 and w2 may contribute to the classification, e.g. '[w2] held at [w1]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w1 w2) for a given noun-compound w1 w2 as follows.", "We predict the indices of the k most likely paraphrases: $\hat{p}_1, \ldots, \hat{p}_k = \mathrm{argmax}_k\, \hat{p}$, where $\hat{p}$ is the distribution over the paraphrase vocabulary $V_p$, as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in $\hat{p}$: $par(w_1 w_2) = \frac{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i} \cdot V_{p_{\hat{p}_i}}}{\sum_{i=1}^{k} \hat{p}_{\hat{p}_i}}$ (3).", "We train a linear classifier, and represent w1 w2 by a feature vector $f(w_1 w_2)$ in two variants: paraphrase, $f(w_1 w_2) = par(w_1 w_2)$, or integrated, where the paraphrase vector is concatenated to the constituent word embeddings, $f(w_1 w_2) = [par(w_1 w_2), \vec{w}_1, \vec{w}_2]$.", "The classifier type (logistic regression/SVM), k, and the regularization penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a noun-compound by the concatenation of its constituent embeddings, $f(w_1 w_2) = [\vec{w}_1, \vec{w}_2]$ (distributional).",
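Equation 3 is a confidence-weighted average of the encoded top-k paraphrases. The numpy sketch below illustrates it; `encode` is a hypothetical stand-in for the trained biLSTM paraphrase encoder, and `p_hat`/`vocab` are the assumed prediction vector and paraphrase vocabulary.

```python
import numpy as np

# Sketch of Equation 3: weight each top-k paraphrase encoding by its
# confidence score and average. `encode(p)` stands in for the biLSTM
# encoder; `p_hat` and `vocab` are hypothetical inputs.
def paraphrase_vector(p_hat, vocab, encode, k=15):
    top = np.argsort(-p_hat)[:k]
    weights = p_hat[top]                                 # confidence scores
    vectors = np.stack([encode(vocab[i]) for i in top])  # shape (k, 2d)
    return (weights[:, None] * vectors).sum(axis=0) / weights.sum()

# Integrated feature vector for classification:
# np.concatenate([paraphrase_vector(p_hat, vocab, encode), w1_vec, w2_vec])
```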
"Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled with 37 fine-grained relations (Tratz-fine) or 12 coarse-grained relations (Tratz-coarse).", "We report performance on two different splits into train, test, and validation sets: a random split in a 75:20:5 ratio and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015), a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g. inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018).",
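A lexical split can be built by partitioning the constituent vocabulary first and keeping only compounds whose constituents fall entirely on one side. The sketch below illustrates that idea; it is not the exact procedure of Shwartz and Waterson (2018), and the (w1, w2, relation) triple format is an assumption.

```python
import random

# Sketch: lexical split with disjoint constituent vocabularies, so no
# test compound shares a constituent with a training compound.
# `compounds` is assumed to be a list of (w1, w2, relation) triples.
def lexical_split(compounds, test_ratio=0.2, seed=0):
    vocab = sorted({w for w1, w2, _ in compounds for w in (w1, w2)})
    random.Random(seed).shuffle(vocab)
    test_vocab = set(vocab[: int(len(vocab) * test_ratio)])
    train = [c for c in compounds
             if c[0] not in test_vocab and c[1] not in test_vocab]
    test = [c for c in compounds
            if c[0] in test_vocab and c[1] in test_vocab]
    return train, test  # compounds mixing both vocabularies are discarded
```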
"Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010): we re-implement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016): a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with the models of Socher et al. (2012).", "We report the results from Shwartz and Waterson (2018).", "3) Paraphrase-based (Shwartz and Waterson, 2018): a neural classification model that learns an LSTM-based representation of the joint occurrences of w1 and w2 in a corpus (i.e. observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model; however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method of Shwartz and Waterson (2018), in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-coarse lexical split.", "Examination of the per-relation F1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F1 points), OBJECTIVE (+5.5), ATTRIBUTE (+3.8), and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top-ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional noun-compounds, which are included in the classification dataset (§5.2).", "We assumed that these compounds, more often than compositional ones, would consist of unrelated constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w2] is unrelated to [w1]'.", "Here, we assess whether our model succeeds in recognizing non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al. (2011), which consists of 90 noun-compounds along with human judgments about their compositionality on a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that a strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest applying it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al. (2011).", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility of using the biLSTM to generate completely new paraphrase templates unseen during training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-17
Results
conservative models; rewards only precision; rewards recall and precision
[]
GEM-SciDuet-train-128#paper-1349#slide-18
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improve performance on both the noun-compound paraphrasing and classification tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the nouncompounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g.", "in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered as a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, (23 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most system focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word nouncompounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the Word-Net definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative samples score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ] } surrounded by the sequences of words v 1:i−1 and v i+1:n , we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: bLS(v 1:i , x, v i+1:n ) i .", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v 1:i−1 and the subsequent words v i+1:n .", "Prediction.", "We predict a distribution of the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: p = softmax(W p · bLS( w 2 , [p], w 1 ) 2 ) w 1 = softmax(W w · bLS( w 2 , p 1:n , [w 1 ]) n+1 ) w 2 = softmax(W w · bLS([w 2 ], p 1:n , w 1 ) 1 ) (1) where W w ∈ R |Vw|×2d , W p ∈ R |Vp|×2d , and d is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: 3 p i = argmax(p) w 1i = argmax(ŵ 1 ) w 2i = argmax(ŵ 2 ) (2) The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to 
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each componentpair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specified verbal paraphrases.", "The list often contains multiple semanticallysimilar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexicallydivergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, 4 the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: p 1 , ...,p k = argmax kp , wherep is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision, 
as to not let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and ngrams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each nouncompound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each nouncompound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Method isomorphic non-isomorphic Baselines SFS (Versley, 2013) 23.1 17.9 IIITH (Surtani et al., 2013) 23.1 25.8 MELODI (Van de Cruys et al., 2013) 13.0 54.8 SemEval 2013 Baseline (Hendrickx et al., 2013) 13 Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the nonisomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all nouncompounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We 
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 nouncompounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manu-ally annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stem from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 to the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases:p 1 , ...,p k = argmax kp , wherep is the distribution on the paraphrase vocabulary V p , as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores inp: par(w 1 w 2 ) = k i=1pp i · V pp i k i=1pp i (3) We train a linear classifier, and represent w 1 w 2 in a feature vector f (w 1 w 2 ) in two variants: paraphrase: f (w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings f (w 1 w 2 ) = [ par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a nouncompound by the concatenation of its constituent embeddings f (w 1 w 2 ) = [ w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations 
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits to train, test, and validation: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with (Socher et al., 2012) models.", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F 1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F 1 points), OBJECTIVE (+5.5), AT-TRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional nouncompounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds to recognize non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 nouncompounds along with human judgments about their compositionality in a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest to apply it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility to use the biL-STM for generating completely new paraphrase templates unseen during training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-18
Error Analysis False Positive
Valid, missing from gold-standard; too specific (life of women in community); E.g., n-grams don't respect syntactic structure: "rinse away the oil from baby's head" yields the n-gram "oil from baby"; minor grammatical changes (force of coalition forces)
[]
GEM-SciDuet-train-128#paper-1349#slide-19
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the nouncompounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g.", "in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered as a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, (23 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most system focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word nouncompounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the Word-Net definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') ≪ count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative samples score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ]} surrounded by the sequences of words v 1:i−1 and v i+1:n , we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: bLS(v 1:i , x, v i+1:n ) i .", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v 1:i−1 and the subsequent words v i+1:n .", "Prediction.", "We predict a distribution over the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: $\hat{p} = \mathrm{softmax}(W_p \cdot \mathrm{bLS}(\vec{w}_2, [p], \vec{w}_1)_2)$, $\hat{w}_1 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}(\vec{w}_2, \vec{p}_{1:n}, [w_1])_{n+1})$, $\hat{w}_2 = \mathrm{softmax}(W_w \cdot \mathrm{bLS}([w_2], \vec{p}_{1:n}, \vec{w}_1)_1)$ (1), where $W_w \in \mathbb{R}^{|V_w| \times 2d}$, $W_p \in \mathbb{R}^{|V_p| \times 2d}$, and d is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: $\hat{p}_i = \operatorname{argmax}(\hat{p})$, $\hat{w}_{1i} = \operatorname{argmax}(\hat{w}_1)$, $\hat{w}_{2i} = \operatorname{argmax}(\hat{w}_2)$ (2).", "The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each componentpair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specified verbal paraphrases.", "The list often contains multiple semanticallysimilar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexicallydivergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, 4 the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: p 1 , ...,p k = argmax kp , wherep is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision, 
as to not let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and ngrams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each nouncompound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each nouncompound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Method isomorphic non-isomorphic Baselines SFS (Versley, 2013) 23.1 17.9 IIITH (Surtani et al., 2013) 23.1 25.8 MELODI (Van de Cruys et al., 2013) 13.0 54.8 SemEval 2013 Baseline (Hendrickx et al., 2013) 13 Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the nonisomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all nouncompounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We 
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 nouncompounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manu-ally annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stem from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 to the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases:p 1 , ...,p k = argmax kp , wherep is the distribution on the paraphrase vocabulary V p , as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores inp: par(w 1 w 2 ) = k i=1pp i · V pp i k i=1pp i (3) We train a linear classifier, and represent w 1 w 2 in a feature vector f (w 1 w 2 ) in two variants: paraphrase: f (w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings f (w 1 w 2 ) = [ par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a nouncompound by the concatenation of its constituent embeddings f (w 1 w 2 ) = [ w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations 
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits to train, test, and validation: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with (Socher et al., 2012) models.", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F 1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F 1 points), OBJECTIVE (+5.5), AT-TRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional nouncompounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds to recognize non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 nouncompounds along with human judgments about their compositionality in a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest to apply it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility to use the biL-STM for generating completely new paraphrase templates unseen during training." ] }
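The prediction step in Eq. 1 of the paper content above amounts to running a shared bi-LSTM over a sequence in which the missing component is replaced by a learned slot embedding, then projecting the output vector at the slot position onto the relevant vocabulary. Below is a minimal sketch of subtask (1), predicting the paraphrase given the two constituents; it is an illustrative PyTorch re-implementation, not the authors' DyNet code, and all module and variable names are assumptions.

```python
# Sketch of Eq. 1, subtask (1): score paraphrase templates for the
# sequence "w2 [p] w1" (e.g. "cake [p] apple"). Illustrative only.
import torch
import torch.nn as nn

class ParaphrasePredictor(nn.Module):
    def __init__(self, n_words, n_paraphrases, emb_dim=100, hid_dim=100):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, emb_dim)   # pre-trained GloVe, frozen in the paper
        self.slot_emb = nn.Embedding(3, emb_dim)         # learned [w1], [w2], [p] embeddings
        self.bilstm = nn.LSTM(emb_dim, hid_dim,
                              bidirectional=True, batch_first=True)
        self.W_p = nn.Linear(2 * hid_dim, n_paraphrases) # projection onto paraphrase vocab V_p

    def forward(self, w2_idx, w1_idx):
        w2 = self.word_emb(w2_idx)                          # (batch, emb)
        w1 = self.word_emb(w1_idx)
        p_slot = self.slot_emb(torch.full_like(w2_idx, 2))  # id 2 = the [p] slot
        seq = torch.stack([w2, p_slot, w1], dim=1)          # (batch, 3, emb)
        out, _ = self.bilstm(seq)                           # (batch, 3, 2*hid)
        return torch.log_softmax(self.W_p(out[:, 1]), dim=-1)  # output at the slot position
```

Subtasks (2) and (3) would reuse the same embeddings and bi-LSTM, sharing one projection W_w onto the word vocabulary; the three cross-entropy losses are summed and weighted by the instance score, as the content above describes.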
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-19
Error Analysis False Negative
Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018 (mutation of a gene)
Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018 (mutation of a gene)
[]
GEM-SciDuet-train-128#paper-1349#slide-20
1349
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improve performance on both the noun-compound paraphrasing and classification tasks.
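Section 3.2 of the paper content describes the training-data scoring in prose: n-gram frequencies are normalized within each paraphrase length so that short templates do not dominate, and 1% negative samples of the form '[w2] is unrelated to [w1]' are added. One plausible implementation is sketched below; the data structures are placeholders, not the authors' code.

```python
# Sketch of the training-data scoring and negative sampling of Section 3.2.
# `extracted` maps (w2, paraphrase, w1) -> corpus frequency (placeholder).
import random
from collections import defaultdict

def score_instances(extracted):
    totals = defaultdict(float)                      # total mass per template length
    for (w2, p, w1), freq in extracted.items():
        totals[len(p.split())] += freq
    return {inst: freq / totals[len(inst[1].split())]  # distribution per length
            for inst, freq in extracted.items()}

def add_negatives(instances, nouns, neg_score, ratio=0.01):
    """Add ~1% '[w2] is unrelated to [w1]' pairs of random, non-co-occurring nouns."""
    for _ in range(int(len(instances) * ratio)):
        w1, w2 = random.sample(nouns, 2)
        instances[(w2, "[w2] is unrelated to [w1]", w1)] = neg_score
    return instances
```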
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229 ], "paper_content_text": [ "Introduction Noun-compounds hold an implicit semantic relation between their constituents.", "For example, a 'birthday cake' is a cake eaten on a birthday, while 'apple cake' is a cake made of apples.", "Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013) .", "The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g.", "Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g.", "Hendrickx et al., 2013) .", "Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound's constituents as a source of explicit relation paraphrases (e.g.", "Wubben, 2010; Versley, 2013) .", "Such methods are unable to generalize for unseen noun-compounds.", "Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007) , and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge.", "For example, consider interpreting parsley cake as a cake made of parsley vs. 
resignation cake as a cake eaten to celebrate quitting an unpleasant job.", "We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds.", "Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting '[w 2 ] made of [w 1 ]' given 'apple cake'), or a missing constituent given a combination of paraphrase and noun-compound (predicting 'apple' given 'cake made of [w 1 ]').", "Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization.", "Interpreting 'parsley cake' effectively reduces to identifying paraphrase templates whose \"selectional preferences\" (Pantel et al., 2007) on each constituent fit 'parsley' and 'cake'.", "A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4).", "We evaluate our model on both the paraphrasing and the classification tasks (Section 5).", "On both tasks, the model's ability to generalize leads to improved performance in challenging evaluation settings.", "1 2 Background Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations.", "Early work on the task leveraged information derived from lexical resources and corpora (e.g.", "Girju, 2007; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010) .", "More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g.", "Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012) .", "In the second step, the noun-compound representations are used as feature vectors for classification (e.g.", "Dima and Hinrichs, 2015; Dima, 2016) .", "The datasets for this task differ in size, number of relations and granularity level (e.g.", "Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010) .", "The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007) .", "Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011) , business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business).", "Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases.", "For example, apple cake is a cake from, made of, or which contains apples.", "The suggestion was embraced and resulted in two SemEval tasks.", "SemEval 2010 task 9 (Butnariu et al., 2009 ) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments.", "In SemEval 2013 task 4 (Hendrickx et al., 2013) , systems were expected to provide a ranked list of paraphrases extracted from free text.", "Various approaches were proposed for this task.", "Most approaches start with a pre-processing step of extracting joint 
occurrences of the constituents from a corpus to generate a list of candidate paraphrases.", "Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017) , while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the nouncompounds (Van de Cruys et al., 2013) .", "One of the challenges of this approach is the ability to generalize.", "If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases.", "It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few.", "The approach of Van de Cruys et al.", "(2013) somewhat generalizes for unseen noun-compounds.", "They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus.", "Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases.", "For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife.", "In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al.", "(2013) learned \"is-a\" relations between paraphrases from the co-occurrences of various paraphrases with each other.", "For example, the specific '[w 2 ] extracted from [w 1 ]' template (e.g.", "in the context of olive oil) generalizes to '[w 2 ] made from [w 1 ]'.", "One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases.", "Noun-compounds in other Tasks Noun-compound paraphrasing may be considered as a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning.", "However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text.", "Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, (23 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017) .", "If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead.", "Another related task is Open Information Extraction (Etzioni et al., 2008) , whose goal is to extract relational tuples from text.", "Most system focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions.", "Pal and Mausam (2016) focused on segmenting multi-word nouncompounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from \"NIH director Francis Collins\".", "Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the Word-Net definition \"industry that produces and delivers oil\".", "This method is very limited as it can only interpret noun-compounds with dictionary entries, while the 
majority of English noun-compounds don't have them (Nakov, 2013) .", "Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent.", "Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases.", "Section 3.2 details the creation of training data, and Section 3.3 describes the model.", "Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w 2 , p, w 1 ), and we train the model on 3 subtasks: (1) predict p given w 1 and w 2 , (2) predict w 1 given p and w 2 , and (3) predict w 2 given p and w 1 .", "Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple).", "Effectively, the model is trained to answer questions such as \"what can cake be made of?", "\", \"what can be made of apple?", "\", and \"what are the possible relationships between cake and apple?\".", "The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity.", "Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g.", "'is made of' and 'made of'), or from shared constituents, e.g.", "'[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry' can share [w 1 ] = insurance and [w 2 ] = company .", "This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus.", "Training Data We collect a training set of (w 2 , p, w 1 , s) examples, where w 1 and w 2 are constituents of a nouncompound w 1 w 2 , p is a templated paraphrase, and s is the score assigned to the training instance.", "2 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011) .", "To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as '[w 2 ] VERB PREP [w 1 ]', we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases.", "Corpus.", "Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) .", "The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web.", "We search for n-grams following the extracted patterns and containing w 1 and w 2 's lemmas for some noun-compound in the set.", "We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases.", "For example, from the 5-gram 'cake made of sweet apples' we extract the training example (cake, made of, apple).", "We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances.", "Weighting.", "Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases.", "For instance, 'cake of apples' may also appear in the corpus, although with lower frequency than 'cake from apples'.", "As also noted by Surtani et al.", "(2013) , the 
shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g.", "count('cake made of apples') count('cake of apples')).", "We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length.", "Negative Samples.", "We add 1% of negative samples by selecting random corpus words w 1 and w 2 that do not co-occur, and adding an example (w 2 , [w 2 ] is unrelated to [w 1 ], w 1 , s n ), for some predefined negative samples score s n .", "Similarly, for a word w i that did not occur in a paraphrase p we add (w i , p, UNK, s n ) or (UNK, p, w i , s n ), where UNK is the unknown word.", "This may help the model deal with non-compositional noun-compounds, where w 1 and w 2 are unrelated, rather than forcibly predicting some relation between them.", "Model For a training instance (w 2 , p, w 1 , s), we predict each item given the encoding of the other two.", "Encoding.", "We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014) , which are fixed during training.", "In addition, we learn embeddings for the special words [w 1 ], [w 2 ], and [p] , which are used to represent a missing component, as in \"cake made of [w 1 ]\", \"[w 2 ] made of apple\", and \"cake [p] apple\".", "For a missing component x ∈ {[p], [w 1 ], [w 2 ] } surrounded by the sequences of words v 1:i−1 and v i+1:n , we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005) , and take the ith output vector as representing the missing component: bLS(v 1:i , x, v i+1:n ) i .", "In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v 1:i−1 and the subsequent words v i+1:n .", "Prediction.", "We predict a distribution of the vocabulary of the missing component, i.e.", "to predict w 1 correctly we need to predict its index in the word vocabulary V w , while the prediction of p is from the vocabulary of paraphrases in the training set, V p .", "We predict the following distributions: p = softmax(W p · bLS( w 2 , [p], w 1 ) 2 ) w 1 = softmax(W w · bLS( w 2 , p 1:n , [w 1 ]) n+1 ) w 2 = softmax(W w · bLS([w 2 ], p 1:n , w 1 ) 1 ) (1) where W w ∈ R |Vw|×2d , W p ∈ R |Vp|×2d , and d is the embeddings dimension.", "During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score.", "During inference, we predict the missing components by picking the best scoring index in each distribution: 3 p i = argmax(p) w 1i = argmax(ŵ 1 ) w 2i = argmax(ŵ 2 ) (2) The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters.", "Subtasks (2) and (3) also share W w , the MLP that predicts the index of a word.", "Table 1 : Examples of top ranked predicted components using the model: predicting the paraphrase given w 1 and w 2 (left), w 1 given w 2 and the paraphrase (middle), and w 2 given w 1 and the paraphrase (right).", "Implementation Details.", "The model is implemented in DyNet (Neubig et al., 2017) .", "We dedicate a small number of noun-compounds from the corpus for validation.", "We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs.", "We use Momentum SGD (Nesterov, 1983) , and set the batch size to 
10 and the other hyper-parameters to their default values.", "Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs.", "Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word.", "The examples in the table are from among the top 10 ranked predictions for each componentpair.", "We note that most of the (w 2 , paraphrase, w 1 ) triplets in the table do not occur in the training data, but are rather generalized from similar examples.", "For example, there is no training instance for \"company in the software industry\" but there is a \"firm in the software industry\" and a company in many other industries.", "While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specified verbal paraphrases.", "The list often contains multiple semanticallysimilar paraphrases, such as '[w 2 ] involved in [w 1 ]' and '[w 2 ] in [w 1 ] industry'.", "This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents.", "To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2 .", "The projection positions semantically-similar but lexicallydivergent paraphrases in proximity, likely due to many shared constituents.", "For instance, 'with', 'from', and 'out of' can all describe the relation between food words and their ingredients.", "Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks.", "The main evaluation is on retrieving and ranking paraphrases ( §5.1).", "For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations ( §5.2), although it wasn't designed for this task.", "Paraphrasing Task Definition.", "The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases.", "In SemEval 2013 Task 4, 4 the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators.", "Model.", "For a given noun-compound w 1 w 2 , we first predict the k = 250 most likely paraphrases: p 1 , ...,p k = argmax kp , wherep is the distribution of paraphrases defined in Equation 1.", "While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments.", "We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments.", "We follow Herbrich (2000) and learn a pairwise ranking model.", "The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011) .", "For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers.", "We extract the following features for a paraphrase p: is its confidence score.", "The last feature incorporates the original model score into the decision, 
as to not let other considerations such as preposition frequency in the training set take over.", "During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking.", "It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025.", "The values for k and the threshold were tuned on the training set.", "Evaluation Settings.", "The SemEval 2013 task provided a scorer that compares words and ngrams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g.", "in derivations) yields a partial scoring.", "The overall score assigned to each system is calculated in two different ways.", "The 'isomorphic' setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order.", "The 'non-isomorphic' setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order.", "Baselines.", "We compare our method with the published results from the SemEval task.", "The SemEval 2013 baseline generates for each nouncompound a list of prepositional paraphrases in an arbitrary fixed order.", "It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic.", "The MELODI system performs similarly: it represents each nouncompound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus.", "The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones.", "SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision.", "As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2.", "This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting.", "Method isomorphic non-isomorphic Baselines SFS (Versley, 2013) 23.1 17.9 IIITH (Surtani et al., 2013) 23.1 25.8 MELODI (Van de Cruys et al., 2013) 13.0 54.8 SemEval 2013 Baseline (Hendrickx et al., 2013) 13 Table 3 : Categories of false positive and false negative predictions along with their percentage.", "Results.", "Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.", "Our method outperforms all the methods in the isomorphic setting.", "In the nonisomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision.", "The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance.", "The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting.", "Our model, which was trained on the very same data, retrieved paraphrases for all nouncompounds.", "For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases \"system of welfare benefits\", \"system to provide welfare\" and others.", "Error Analysis.", "We 
analyze the causes of the false positive and false negative errors made by the model.", "For each error type we sample 10 nouncompounds.", "For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model.", "Table 3 displays the manu-ally annotated categories for each error type.", "Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, \"discussion by group\").", "Some are borderline valid with minor grammatical changes (error 6, \"force of coalition forces\") or too specific (error 2, \"life of women in community\" instead of \"life in community\").", "Common prepositional paraphrases were often retrieved although they are incorrect (error 3).", "We conjecture that this error often stem from an n-gram that does not respect the syntactic structure of the sentence, e.g.", "a sentence such as \"rinse away the oil from baby 's head\" produces the n-gram \"oil from baby\".", "With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, \"holding done in the case of a share\").", "Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, \"mutation of a gene\").", "Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, \"holding of shares\" instead of \"holding of share\").", "Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w 1 w 2 to the relation that holds between w 1 and w 2 .", "Potentially, the corpus co-occurrences of w 1 and w 2 may contribute to the classification, e.g.", "'[w 2 ] held at [w 1 ]' indicates a TIME relation.", "Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases.", "Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space.", "Model.", "We generate a paraphrase vector representation par(w 1 w 2 ) for a given noun-compound w 1 w 2 as follows.", "We predict the indices of the k most likely paraphrases:p 1 , ...,p k = argmax kp , wherep is the distribution on the paraphrase vocabulary V p , as defined in Equation 1.", "We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores inp: par(w 1 w 2 ) = k i=1pp i · V pp i k i=1pp i (3) We train a linear classifier, and represent w 1 w 2 in a feature vector f (w 1 w 2 ) in two variants: paraphrase: f (w 1 w 2 ) = par(w 1 w 2 ), or integrated: concatenated to the constituent word embeddings f (w 1 w 2 ) = [ par(w 1 w 2 ), w 1 , w 2 ].", "The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set.", "We also provide a baseline in which we ablate the paraphrase component from our model, representing a nouncompound by the concatenation of its constituent embeddings f (w 1 w 2 ) = [ w 1 , w 2 ] (distributional).", "Datasets.", "We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations 
(Tratz-fine) or 12 coarse-grained relations (Tratz-coarse) .", "We report the performance on two different dataset splits to train, test, and validation: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015) , on a lexical split in which the sets consist of distinct vocabularies.", "The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g.", "inferring the relation in pear tart based on apple cake and other similar compounds.", "We follow the random and full-lexical splits from Shwartz and Waterson (2018) .", "Baselines.", "We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010) : we reimplement a version of the classifier with features from WordNet and Roget's Thesaurus.", "2) Compositional (Dima, 2016) : a neural architecture that operates on the distributional representations of the noun-compound and its constituents.", "Noun-compound representations are learned with (Socher et al., 2012) models.", "We report the results from Shwartz and Waterson (2018) .", "3) Paraphrase-based (Shwartz and Waterson, 2018) : a neural classification model that learns an LSTM-based representation of the joint occurrences of w 1 and w 2 in a corpus (i.e.", "observed paraphrases), and integrates distributional information using the constituent embeddings.", "Results.", "Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", "The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods.", "The contribution of the paraphrase component is especially noticeable in the lexical splits.", "As expected, the integrated method in Shwartz and Waterson (2018) , in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model.", "The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification.", "Analysis.", "To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split.", "Examination of the per-relation F 1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F 1 points), OBJECTIVE (+5.5), AT-TRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5).", "Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model.", "For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation.", "Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them.", "In §3.2 we added negative samples to the training data to simulate non-compositional nouncompounds, which are included in the classification dataset ( §5.2).", "We assumed that these compounds, more often than compositional ones would consist of unrelated 
constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with '[w 2 ] is unrelated to [w 1 ]'.", "Here, we assess whether our model succeeds to recognize non-compositional noun-compounds.", "We used the compositionality dataset of Reddy et al.", "(2011) which consists of 90 nouncompounds along with human judgments about their compositionality in a scale of 0-5, 0 being non-compositional and 5 being compositional.", "For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors.", "The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one.", "For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys.", "In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting \"wedding made of diamond\".", "Finally, the \"unrelated\" paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher).", "We conclude that the model does not address compositionality and suggest to apply it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al.", "(2011) .", "Conclusion We presented a new semi-supervised model for noun-compound paraphrasing.", "The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent.", "This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks.", "In the future, we plan to take generalization one step further, and explore the possibility to use the biL-STM for generating completely new paraphrase templates unseen during training." ] }
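Equation 3 above is a confidence-weighted average of the bi-LSTM encodings of the top-k predicted paraphrases; the 'integrated' classifier of Section 5.2 concatenates it with the constituent word embeddings. A minimal numpy/scikit-learn sketch with placeholder arrays:

```python
# Sketch of Eq. 3 and the integrated feature vector for classification (Section 5.2).
import numpy as np
from sklearn.linear_model import LogisticRegression

def paraphrase_vector(scores, encodings):
    """scores: (k,) confidences p-hat; encodings: (k, 2d) bi-LSTM paraphrase vectors."""
    return scores @ encodings / scores.sum()          # Eq. 3: weighted average

def integrated_features(scores, encodings, w1_vec, w2_vec):
    """f(w1 w2) = [par(w1 w2); w1; w2], the 'integrated' variant."""
    return np.concatenate([paraphrase_vector(scores, encodings), w1_vec, w2_vec])

# clf = LogisticRegression().fit(X_train, y_train)  # classifier type (vs. SVM),
# k, and penalty were tuned on the validation set per the paper.
```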
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Noun-compound Classification", "Noun-compound Paraphrasing", "Noun-compounds in other Tasks", "Paraphrasing Model", "Multi-task Reformulation", "Training Data", "Model", "Qualitative Analysis", "Evaluation: Noun-Compound Interpretation Tasks", "Paraphrasing", "Classification", "Compositionality Analysis", "Conclusion" ] }
GEM-SciDuet-train-128#paper-1349#slide-20
Recap
A model for generating paraphrases for given noun-compounds Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018 Generalize for unseen noun-compounds Embed semantically-similar paraphrases in proximity Improved performance in challenging evaluation settings
A model for generating paraphrases for given noun-compounds Vered Shwartz and Ido Dagan Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations ACL 2018 Generalize for unseen noun-compounds Embed semantically-similar paraphrases in proximity Improved performance in challenging evaluation settings
[]
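The 'embed semantically-similar paraphrases in proximity' point in the recap above was illustrated in Section 4 with a t-SNE projection of the learned paraphrase embeddings (Figure 2). A sketch follows, with random placeholder vectors standing in for the trained embeddings:

```python
# t-SNE projection of paraphrase-template embeddings (cf. Section 4, Figure 2).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

vecs = np.random.randn(50, 200)                  # placeholder for trained embeddings
labels = [f"template_{i}" for i in range(50)]    # placeholder template names
coords = TSNE(n_components=2).fit_transform(vecs)
plt.scatter(coords[:, 0], coords[:, 1], s=4)
for (x, y), lab in zip(coords, labels):
    plt.annotate(lab, (x, y), fontsize=8)
plt.show()
```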
GEM-SciDuet-train-129#paper-1351#slide-0
1351
Document Context Neural Machine Translation with Memory Networks
We present a document-level neural machine translation model which takes both source and target document context into account using memory networks. We model the problem as a structured prediction problem with interdependencies among the observed and hidden variables, i.e., the source sentences and their unobserved target translations in the document. The resulting structured prediction problem is tackled with a neural translation model equipped with two memory components, one each for the source and target side, to capture the documental interdependencies. We train the model end-to-end, and propose an iterative decoding algorithm based on block coordinate descent. Experimental results of English translations from French, German, and Estonian documents show that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.
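The two mechanisms this abstract commits to can be sketched compactly: a soft memory read (relevance-weighted sum over cells) and block-coordinate-descent decoding that re-translates one sentence at a time while the others stay fixed. In the sketch below, `base_translate` and `context_translate` are hypothetical stand-ins for the sentence-level decoder P(y_t | x_t) and the document-conditioned decoder P(y_t | x_t, y_-t, x_-t).

```python
# Sketches of (i) a memory-network read and (ii) block coordinate descent
# decoding for document-level NMT. Function names are assumptions.
import numpy as np

def memory_read(M, q):
    """Soft read: softmax relevance of query q (d,) over memory cells M (K, d)."""
    s = M @ q
    p = np.exp(s - s.max())
    p /= p.sum()
    return p @ M                                    # weighted sum of memory cells

def decode_document(src_sents, base_translate, context_translate, n_sweeps=3):
    trg_sents = [base_translate(x) for x in src_sents]   # init: sentence-independent
    for _ in range(n_sweeps):
        for t, x_t in enumerate(src_sents):
            src_ctx = src_sents[:t] + src_sents[t + 1:]  # x_-t
            trg_ctx = trg_sents[:t] + trg_sents[t + 1:]  # y_-t, held fixed
            trg_sents[t] = context_translate(x_t, src_ctx, trg_ctx)  # update one coordinate
    return trg_sents
```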
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Neural machine translation (NMT) has proven to be powerful (Sutskever et al., 2014; Bahdanau et al., 2015) .", "It is on-par, and in some cases, even surpasses the traditional statistical MT (Luong et al., 2015) while enjoying more flexibility and significantly less manual effort for feature engineering.", "Despite their flexibility, most neural MT models translate sentences independently.", "Discourse phenomenon such as pronominal anaphora and lexical consistency, may depend on long-range dependency going farther than a few previous sentences, are neglected in sentencebased translation (Bawden et al., 2017) .", "There are only a handful of attempts to document-wide machine translation in statistical and neural MT camps.", "Hardmeier and Federico (2010) ; Gong et al.", "(2011) ; Garcia et al.", "(2014) propose document translation models based on statistical MT but are restrictive in the way they incorporate the document-level information and fail to gain significant improvements.", "More recently, there have been a few attempts to incorporate source side context into neural MT (Jean et al., 2017; Wang et al., 2017; Bawden et al., 2017) ; however, these works only consider a very local context including a few previous source/target sentences, ignoring the global source and target documental contexts.", "The latter two report deteriorated performance when using the target-side context.", "In this paper, we present a document-level machine translation model which combines sentencebased NMT (Bahdanau et al., 2015) with memory networks (Sukhbaatar et al., 2015) .", "We capture the global source and target document context with two memory components, one each for the source and target side, and incorporate it into the sentence-based NMT by changing the decoder to condition on it as the sentence translation is generated.", "We conduct experiments on three language pairs: French-English, German-English and Estonian-English.", "The experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.", "Background Neural Machine Translation (NMT) Our document NMT model is grounded on sentence-based NMT model (Bahdanau et al., 2015) which contains an encoder to read the source sentence as well as an attentional decoder to generate the target translation.", "Encoder It is a bidirectional RNN consisting of two RNNs running in opposite directions over the source sentence: − → hi = −−→ RNN( − → 
h i−1, ES[xi]), ← − h i = ←−− RNN( ← − h i+1, ES[xi]) where E S [x i ] is embedding of the word x i from the embedding table E S of the source language, and − → h i and ← − h i are the hidden states of the forward and backward RNNs which can be based on the LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) units.", "Each word in the source sentence is then represented by the concatenation of the corresponding bidirectional hidden states, h i = [ − → h i ; ← − h i ].", "Decoder The generation of each word y j is conditioned on all of the previously generated words y <j via the state of the RNN decoder s j , and the source sentence via a dynamic context vector c j : yj ∼ softmax(Wy · rj + br) rj = tanh(sj + Wrc · cj + Wrj · ET [yj−1]) sj = tanh(Ws · sj−1 + Wsj · ET [yj−1] + Wsc · cj) where E T [y j ] is embedding of the word y j from the embedding table E T of the target language, and W matrices and b r vector are the parameters.", "The dynamic context vector c j is computed via c j = i α ji h i , where α j = softmax(a j ) a ji = v · tanh(W ae · h i + W at · s j−1 ) This is known as the attention mechanism which dynamically attends to relevant parts of the source necessary for generating the next target word.", "Memory Networks (MemNets) Memory Networks are a class of neural models that use external memories to perform inference based on long-range dependencies.", "A memory is a collection of vectors M = {m 1 , .., m K } constituting the memory cells, where each cell m k may potentially correspond to a discrete object x k .", "The memory is equipped with a read and optionally a write operation.", "Given a query vector q, the output vector generated by reading from the memory is |M | i=1 p i m i , where p i represents the relevance of the query to the i-th memory cell p = Document NMT as Structured Prediction We formulate document-wide machine translation as a structured prediction problem.", "Given a set of sentences {x 1 , .", ".", ".", ", x |d| } in a source document d, we are interested in generating the collection of their translations {y 1 , .", ".", ".", ", y |d| } taking into account interdependencies among them imposed by the document.", "We achieve this by the factor graph in Figure 1 to model the probability of the target document given the source document.", "Our model has two types of factors: • f θ (y t ; x t , x −t ) to capture the interdependencies between the translation y t , the corresponding source sentence x t and all the other sentences in the source document x −t , and • g θ (y t ; y −t ) to capture the interdependencies between the translation y t and all the other translations in the document y −t .", "Hence, the probability of a document translation given the source document is P (y 1 , .", ".", ".", ", y |d| |x 1 , .", ".", ".", ", x |d| ) ∝ exp t f θ (y t ; x t , x −t ) + g θ (y t ; y −t ) .", "The factors f θ and g θ are realised by neural architectures whose parameters are collectively denoted by θ.", "Training It is challenging to train the model parameters by maximising the (regularised) likelihood since computing the partition function is hard.", "This is due to the enormity of factors g θ (y t ; y −t ) over a large number of translation variables y t 's (i.e., the number of sentences in the document) as well as their unbounded domain (i.e., all sentences in the target language).", "Thus, we resort to maximising the pseudo-likelihood (Besag, 1975) for training the parameters: arg max θ d∈D |d| t=1 P θ (y t |x t , y −t , x −t ) (1) where D is 
the set of bilingual training documents, and |d| denotes the number of (bilingual) sentences in the document d = {(x t , y t )} |d| t=1 .", "We directly model the document-conditioned NMT model P θ (y t |x t , y −t , x −t ) using a neural architecture which subsumes both the f θ and g θ factors (covered in the next section).", "Decoding To generate the best translation for a document according to our model, we need to solve the following optimisation problem: arg max y 1 ,...,y |d| |d| t=1 P θ (y t |x t , y −t , x −t ) which is hard (due to similar reasons as mentioned earlier).", "We hence resort to a block coordinate descent optimisation algorithm.", "More specifically, we initialise the translation of each sentence using the base neural MT model P (y t |x t ).", "We then repeatedly visit each sentence in the document, and update its translation using our document-context dependent NMT model P (y t |x t , y −t , x −t ) while the translations of other sentences are kept fixed.", "Context Dependent NMT with MemNets We augment the sentence-level attentional NMT model by incorporating the document context (both source and target) using memory networks when generating the translation of a sentence, as shown in Figure 2 .", "Our model generates the target translation word-by-word from left to right, similar to the vanilla attentional neural translation model.", "However, it conditions the generation of a target word not only on the previously generated words and the current source sentence (as in the vanilla NMT model), but also on all the other source sentences of the document and their translations.", "That is, the generation process is as follows: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, y−t, x−t) (2) where y t,j is the j-th word of the t-th target sentence, y t,<j are the previously generated words, and x −t and y −t are as introduced previously.", "Our model represents the source and target document contexts as external memories, and attends to relevant parts of these external memories when generating the translation of a sentence.", "Let M [x −t ] and M [y −t ] denote external memories representing the source and target document context, respectively.", "These contain memory cells corresponding to all sentences in the document except the t-th sentence (described shortly).", "Let h t and s t be representations of the t-th source sentence and its current translation, from the encoder and decoder respectively.", "We make use of h t as the query to get the relevant context from the source external memory: c src t = MemNet(M [x −t ], h t ) Furthermore, for the t-th sentence, we get the relevant information from the target context: c trg t = MemNet(M [y −t ], s t + W at · h t ) where the query consists of the representation of the translation s t from the decoder endowed with that of the source sentence h t from the encoder to make the query robust to potential noises in the current translation and circumvent error propagation, and W at projects the source representation into the hidden state space.", "Now that we have representations of the relevant source and target document contexts, Eq.", "2 can be re-written as: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, c trg t , c src t ) (3) More specifically, the memory contexts c src t and c trg t are incorporated into the NMT decoder as: • Memory-to-Context in which the memory contexts are incorporated when computing the next decoder hidden state: • Memory-to-Output in which the memory contexts are incorporated in the 
output layer: s t,j = tanh(W s · s t,j−1 + W sj · E T [y t,j ] + W sc · c t,j + W sm · c src t + W st · c trg t ) y t,j ∼ softmax(W y · r t,j + W ym · c src t + W yt · c trg t + b r ) where W sm , W st , W ym , and W yt are the new parameter matrices.", "We use only the source, only the target, or both external memories as the additional conditioning contexts.", "Furthermore, we use either the Memory-to-Context or Memory-to-Output architectures for incorporating the document contexts.", "In the experiments, we will explore these different options to investigate the most effective combination.", "We now turn our attention to the construction of the external memories for the source and target sides of a document.", "The Source Memory We make use of a hierarchical 2-level RNN architecture to construct the external memory of the source document.", "More specifically, we pass each sentence of the document through a sentence-level bidirectional RNN to get the representation of the sentence (by concatenating the last hidden states of the forward and backward RNNs).", "We then pass the sentence representations through a document-level bidirectional RNN to propagate sentences' information across the document.", "We take the hidden states of the document-level bidirectional RNNs as the memory cells of the source external memory.", "The source external memory is built once for each minibatch, and does not change throughout the document translation.", "To be able to fit the computational graph of the document NMT model within GPU memory limits, we pre-train the sentence-level bidirectional RNN using the language modelling training objective.", "However, the document-level bidirectional RNN is trained together with other parameters of the document NMT model by back-propagating the document translation training objective.", "The Target Memory The memory cells of the target external memory represent the current translations of the document.", "Recall from the previous section that we use coordinate descent iteratively to update these translations.", "Let {y 1 , .", ".", ".", ", y |d| } be the current translations, and let {s |y 1 | , .", ".", ".", ", s |y |d| | } be the last states of the decoder when these translations were generated.", "We use these last decoder states as the cells of the external target memory.", "We could make use of hierarchical sentencedocument RNNs to transform the document translations into memory cells (similar to what we do for the source memory); however, it would have been computationally expensive and may have resulted in error propagation.", "We will show in the experiments that our efficient target memory construction is indeed effective.", "Experiments and Analysis Datasets.", "We conducted experiments on three language pairs: French-English, German-English and Estonian-English.", "Table 1 shows the statistics of the datasets used in our experiments.", "The French-English dataset is based on the TED Talks corpus 1 (Cettolo et al., 2012) where each talk is considered a document.", "The Estonian-English data comes from the Europarl v7 corpus 2 (Koehn, 2005) .", "Following Smith et al.", "(2013) , we split the speeches based on the SPEAKER tag and treat them as documents.", "The French-English and Estonian-English corpora were randomly split into train/dev/test sets.", "For German-English, we use the News Commentary v9 corpus 3 for training, news-dev2009 for development, Table 1 : Training/dev/test corpora statistics: number of documents (×100) and sentences (×1000), average 
document length (in sentences) and source/target vocabulary size (×1000).", "For De-En, we report statistics of the two test sets news-test2011 and news-test2016.", "and news-test2011 and news-test2016 as the test sets.", "The news-commentary corpus has document boundaries already provided.", "We pre-processed all corpora to remove very short documents and those with missing translations.", "Out-of-vocabulary and rare words (frequency less than 5) are replaced by the <UNK> token, following Cohn et al.", "(2016).", "4 Evaluation Measures We use BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores to measure the quality of the generated translations.", "We use bootstrap resampling (Clark et al., 2011) to measure statistical significance, p < 0.05, comparing to the baselines.", "Implementation and Hyperparameters We implement our document-level neural machine translation model in C++ using the DyNet library (Neubig et al., 2017) , on top of the basic sentence-level NMT implementation in mantis (Cohn et al., 2016) .", "For the source memory, the sentence and document-level bidirectional RNNs use LSTM and GRU units, respectively.", "The translation model uses GRU units for the bidirectional RNN encoder and the 2-layer RNN decoder.", "GRUs are used instead of LSTMs to reduce the number of parameters in the main model.", "The RNN hidden dimensions and word embedding sizes are set to 512 in the translation and memory components, and the alignment dimension is set to 256 in the translation model.", "Training We use a stage-wise method to train the variants of our document context NMT model.", "Firstly, we pre-train the Memory-to-Context/Memory-to-Output models, setting their readings from the source and target memories to the zero vector.", "This effectively learns parameters associated with the underlying sentence-based NMT model, which is then used as initialisation when training all parameters in the second stage (including the ones from the first stage).", "For the first stage, we make use of stochastic gradient descent (SGD) 5 with initial learning rate of 0.1 and a decay factor of 0.5 after the fourth epoch for a total of ten epochs.", "The convergence occurs in 6-8 epochs.", "For the second stage, we use SGD with an initial learning rate of 0.08 and a decay factor of 0.9 after the first epoch for a total of 15 epochs 6 .", "The best model is picked based on the dev-set perplexity.", "To avoid overfitting, we employ dropout with the rate 0.2 for the single memory model.", "For the dual memory model, we set dropout for Document RNN to 0.2 and for the encoder and decoder to 0.5.", "Mini-batching is used in both stages to speed up training.", "For the largest dataset, the document NMT model takes about 4.5 hours per epoch to train on a single P100 GPU, while the sentence-level model takes about 3 hours per epoch for the same settings.", "When training the document NMT model in the second stage, we need the target memory.", "One option would be to use the ground truth translations for building the memory.", "However, this may result in inferior training, since at the test time, the decoder iteratively updates the translation of sentences based on the noisy translations of other sentences (accessed via the target memory).", "Hence, while training the document NMT model, we construct the target memory from the translations generated by the pre-trained sentence-level model 7 .", "This effectively exposes the model to its potential test-time mistakes during the training time, 
resulting in more robust learned parameters.", "Main Results We have three variants of our model, using: (i) only the source memory (S-NMT+src mem), (ii) only the target memory (S-NMT+trg mem), or 5 In our initial experiments, we found SGD to be more effective than Adam/Adagrad; an observation also made by Bahar et al.", "(2017) .", "6 For the document NMT model training, we did some preliminary experiments using different learning rates and used the scheme which converged to the best perplexity in the least number of epochs while for sentence-level training we follow Cohn et al.", "(2016) .", "7 We report results for two-pass decoding, i.e., we only update the translations once using the initial translations generated from the base model.", "We tried multiple passes of decoding at test-time but it was not helpful.", "(iii) both the source and target memories (S-NMT+both mems).", "We compare these variants against the standard sentence-level NMT model (S-NMT).", "We also compare the source memory variants of our model to the local context-NMT models 8 of Jean et al.", "(2017) and Wang et al.", "(2017) , which use a few previous source sentences as context, added to the decoder hidden state (similar to our Memory-to-Context model).", "Memory-to-Context We consistently observe +1.15/+1.13 BLEU/METEOR score improvements across the three language pairs upon comparing our best model to S-NMT (see Table 2 ).", "Overall, our document NMT model with both memories has been the most effective variant for all of the three language pairs.", "We further experiment to train the target memory variants using gold translations instead of the generated ones for German-English.", "This led to −0.16 and −0.25 decrease 9 in the BLEU scores for the target-only and both-memory variants, which confirms the intuition of constructing the target memory by exposing the model to its noises during training time.", "guage pairs.", "For French→English, all variants of document NMT model show comparable performance when using BLEU; however, when evaluated using METEOR, the dual memory model is the best.", "For German→English, the target memory variants give comparable results, whereas for Estonian→English, the dual memory variant proves to be the best.", "Overall, the Memory-to-Context model variants perform better than their Memory-to-Output counterparts.", "We attribute this to the large number of parameters in the latter architecture (Table 3 ) and limited amount of data.", "We further experiment with more data for train-BLEU METEOR Fr→En De→En Et→EnFr→En De→En Et→En NC-11 NC-16 NC-11 NC-16 Jean et al.", "(2017) 21.95 6.04 10.26 21.67 24.10 11.61 15.56 25.77 Wang et al.", "(2017) ing the sentence-based NMT to investigate the extent to which document context is useful in this setting.", "We randomly choose an additional 300K German-English sentence pairs from WMT'14 data to train the base NMT model in stage 1.", "In stage 2, we use the same document corpus as before to train the document-level models.", "As seen from Figure 3 , the document MT variants still benefit from the document context even when the base model is trained on a larger bilingual corpus.", "For the Memory-to-Context model, we see massive improvements of +0.72 and +1.44 METEOR scores for the source memory and dual memory model respectively, when compared to the baseline.", "On the other hand, for the Memory-to-Output model, the target memory model's METEOR score increases significantly by +1.09 compared to the baseline, slightly differing from the 
corresponding model using the smaller corpus (+1.2).", "Table 4 shows comparison of our Memory-to-Context model variants to local source context-NMT models (Jean et al., 2017; Wang et al., 2017) .", "For French→English, our source memory model is comparable to both baselines.", "For German→English, our S-NMT+src mem model is comparable to Jean et al.", "(2017) but outperforms Wang et al.", "(2017) for one test set according to BLEU, and for both test sets according to METEOR.", "For Estonian→English, our model outperforms Jean et al.", "(2017) .", "Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context since we do an offline training to get the sentence representations (as previously mentioned).", "However, the other two context baselines have access to that information, yet our model's performance is either better or quite close to those models.", "We also look into the unigram BLEU scores to see how much our global source memory variants lead to improvement at the word-level.", "From Table 5 , it can be seen that our model's performance is better than the baselines for majority of the cases.", "The S-NMT+both mems model gives the best results for all three language pairs, showing that leveraging both source and target document context is indeed beneficial for improving MT performance.", "Memory-to-Output From Local Source Context Models Analysis Using Global/Local Target Context We first investigate whether using a local target context would have been equally sufficient in comparison to our global target memory model for the three datasets.", "We condition the decoder on the previous target sentence representation (obtained from the last hidden state of the decoder) by adding it as an additional input to all decoder states (PrevTrg) similar to our Memory-to-Context model.", "From Table 6 , we observe that for French→English and Estonian→English, using all sentences in the target context or just the previous target sentence gives comparable results.", "We may attribute this to these specific datasets, that is documents from TED talks or European Parliament Proceedings may depend more on the local than on the global context.", "However, for German→English , the target memory model performs the best show- ing that for documents with richer context (e.g.", "news articles) we do need the global target document context to improve MT performance.", "Output Analysis To better understand the dual memory model, we look at the first sentence example in Table 7 .", "It can be seen that the source sentence has the noun \"Qimonda\" but the sentencelevel NMT model fails to attend to it when generating the translation.", "On the other hand, the single memory models are better in delivering some, if not all, of the underlying information in the source sentence but the dual memory model's translation quality surpasses them.", "This is because the word \"Qimonda\" was being repeated in this specific document, providing a strong contextual signal to our global document context model while the local context model by Wang et al.", "(2017) is still unable to correctly translate the noun even when it has access to the word-level information of previous sentences.", "We resort to manual evaluation as there is no standard metric which evaluates document-level discourse information like consistency or pronominal anaphora.", "By manual inspection, we observe that our models can identify nouns in the source sentence to resolve coreferent 
pronouns, as shown in the second example of Table 7 .", "Here the topic of the sentence is \"the country under the dictatorship of Lukashenko\" and our target and dual memory models are able to generate the appropriate pronoun/determiner as well as accurately translate the word 'diktatuur', hence producing much better translation as compared to both baselines.", "Apart from these improvements, our models are better in improving the readability of sentences by generating more context appropriate grammatical structures such as verbs and adverbs.", "Furthermore, to validate that our model improves the consistency of translations, we look at five documents (roughly 70 sentences) from the test set of Estonian-English, each of which had a word being repeated in the gold translation.", "Our model is able to resolve the consistency in 22 out of 32 cases as compared to the sentencebased model which only accurately translates 16 of those.", "Following Wang et al.", "(2017) , we also investigate the extent to which our model can correct errors made by the baseline system.", "We randomly choose five documents from the test set.", "Out of the 20 words/phrases which were incorrectly translated by the sentence-based model, our Related Work Document-level Statistical MT There have been a few SMT-based attempts to document MT, but they are either restrictive or do not lead to significant improvements.", "Hardmeier and Federico (2010) identify links among words in the source document using a word-dependency model to improve translation of anaphoric pronouns.", "Gong et al.", "(2011) make use of a cache-based system to save relevant information from the previously generated translations and use that to enhance document-level translation.", "Garcia et al.", "(2014) propose a two-pass approach to improve the translations already obtained by a sentencelevel model.", "Docent is an SMT-based document-level decoder (Hardmeier et al., 2012 (Hardmeier et al., , 2013 , which tries to modify the initial translation generated by the Moses decoder (Koehn et al., 2007) through stochastic local search and hill-climbing.", "Garcia et al.", "(2015) make use of neural-based continuous word representations to incorporate distributional semantics into Docent.", "In another work, Garcia et al.", "(2017) incorporate new word embedding features into Docent to improve the lexical consistency of translations.", "The proposed methods fail to yield improvements upon automatic evaluation.", "Larger Context Neural MT Jean et al.", "(2017) extend the vanilla attention-based neural MT model (Bahdanau et al., 2015) by conditioning the decoder on the previous sentence via attention over its words.", "Extending their model to consider the global source document context would be challenging due to the large size of computation graph over all the words in the source document.", "Wang et al.", "(2017) employ a 2-level hierarichal RNN to summarise three previous source sentences, which is then used as an additional input to the decoder hidden state.", "Bawden et al.", "(2017) use multi-encoder NMT models to exploit context from the previous source and target sentence.", "They highlight the importance of targetside context but report deteriorated BLEU scores when using it.", "All these works consider a very local source/target context and completely ignore the global source and target document contexts.", "Conclusion We have proposed a document-level neural MT model that captures global source and target document context.", "Our model augments 
the vanilla sentence-based NMT model with external memories to incorporate documental interdependencies on both source and target sides.", "We show statistically significant improvements of the translation quality on three language pairs.", "For future work, we intend to investigate models which incorporate specific discourse-level phenomena." ] }
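The decoding procedure described in the paper content above (initialise every sentence with the base sentence-level model, then repeatedly re-translate each sentence while the others stay fixed) can be summarised in a few lines. The following is a minimal Python sketch under stated assumptions: base_model, doc_model, build_source_memory, build_target_memory and translate are hypothetical stand-in names, not the authors' DyNet/C++ interface.

```python
# Block coordinate descent decoding for document NMT (sketch only).
# base_model, doc_model, build_source_memory, build_target_memory and
# translate are hypothetical stand-ins; the real system is C++/DyNet.

def decode_document(src_sents, base_model, doc_model, num_passes=1):
    # Initialise every translation with the sentence-level model P(y_t | x_t).
    translations = [base_model.translate(x) for x in src_sents]

    for _ in range(num_passes):
        for t in range(len(src_sents)):
            # Memories over all *other* sentences/translations (cell t excluded).
            src_mem = doc_model.build_source_memory(src_sents, exclude=t)
            trg_mem = doc_model.build_target_memory(translations, exclude=t)
            # Update y_t with the document-context model while the other
            # translations stay fixed: y_t = argmax P(y | x_t, y_-t, x_-t).
            translations[t] = doc_model.translate(src_sents[t], src_mem, trg_mem)
    return translations
```

With num_passes=1 this matches the two-pass decoding reported in the paper (an initial pass plus one update); the authors note that further decoding passes did not help.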
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Neural Machine Translation (NMT)", "Memory Networks (MemNets)", "Document NMT as Structured Prediction", "Context Dependent NMT with MemNets", "Experiments and Analysis", "Main Results", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-129#paper-1351#slide-0
Why document-level machine translation
Most MT models translate sentences independently Discourse phenomena are ignored, e.g. pronominal anaphora and lexical consistency, which may involve long-range dependencies Statistical MT attempts at document MT do not yield significant empirical improvements Previous context-NMT models only use local context and report deteriorated performance when using the target-side context We incorporate global source and target document contexts
Most MT models translate sentences independently Discourse phenomena are ignored, e.g. pronominal anaphora and lexical consistency, which may involve long-range dependencies Statistical MT attempts at document MT do not yield significant empirical improvements Previous context-NMT models only use local context and report deteriorated performance when using the target-side context We incorporate global source and target document contexts
[]
GEM-SciDuet-train-129#paper-1351#slide-1
1351
Document Context Neural Machine Translation with Memory Networks
We present a document-level neural machine translation model which takes both source and target document context into account using memory networks. We model the problem as a structured prediction problem with interdependencies among the observed and hidden variables, i.e., the source sentences and their unobserved target translations in the document. The resulting structured prediction problem is tackled with a neural translation model equipped with two memory components, one each for the source and target side, to capture the documental interdependencies. We train the model end-to-end, and propose an iterative decoding algorithm based on block coordinate descent. Experimental results of English translations from French, German, and Estonian documents show that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.
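The structured-prediction formulation summarised in this abstract corresponds to the factor-graph model whose equations appear garbled in the extracted paper content. Reconstructed in LaTeX (written with an explicit log for the product in Eq. (1)), they read:

```latex
% Document translation probability as a factor graph over sentences
P(y_1,\dots,y_{|d|} \mid x_1,\dots,x_{|d|})
  \propto \exp\!\Big(\sum_{t} f_\theta(y_t;\, x_t, \mathbf{x}_{-t})
                            + g_\theta(y_t;\, \mathbf{y}_{-t})\Big)

% Pseudo-likelihood training objective, Eq. (1) (Besag, 1975)
\hat{\theta} = \arg\max_{\theta} \sum_{d \in D} \sum_{t=1}^{|d|}
    \log P_\theta(y_t \mid x_t, \mathbf{y}_{-t}, \mathbf{x}_{-t})
```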
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Neural machine translation (NMT) has proven to be powerful (Sutskever et al., 2014; Bahdanau et al., 2015) .", "It is on-par, and in some cases, even surpasses the traditional statistical MT (Luong et al., 2015) while enjoying more flexibility and significantly less manual effort for feature engineering.", "Despite their flexibility, most neural MT models translate sentences independently.", "Discourse phenomenon such as pronominal anaphora and lexical consistency, may depend on long-range dependency going farther than a few previous sentences, are neglected in sentencebased translation (Bawden et al., 2017) .", "There are only a handful of attempts to document-wide machine translation in statistical and neural MT camps.", "Hardmeier and Federico (2010) ; Gong et al.", "(2011) ; Garcia et al.", "(2014) propose document translation models based on statistical MT but are restrictive in the way they incorporate the document-level information and fail to gain significant improvements.", "More recently, there have been a few attempts to incorporate source side context into neural MT (Jean et al., 2017; Wang et al., 2017; Bawden et al., 2017) ; however, these works only consider a very local context including a few previous source/target sentences, ignoring the global source and target documental contexts.", "The latter two report deteriorated performance when using the target-side context.", "In this paper, we present a document-level machine translation model which combines sentencebased NMT (Bahdanau et al., 2015) with memory networks (Sukhbaatar et al., 2015) .", "We capture the global source and target document context with two memory components, one each for the source and target side, and incorporate it into the sentence-based NMT by changing the decoder to condition on it as the sentence translation is generated.", "We conduct experiments on three language pairs: French-English, German-English and Estonian-English.", "The experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.", "Background Neural Machine Translation (NMT) Our document NMT model is grounded on sentence-based NMT model (Bahdanau et al., 2015) which contains an encoder to read the source sentence as well as an attentional decoder to generate the target translation.", "Encoder It is a bidirectional RNN consisting of two RNNs running in opposite directions over the source sentence: − → hi = −−→ RNN( − → 
h i−1, ES[xi]), ← − h i = ←−− RNN( ← − h i+1, ES[xi]) where E S [x i ] is embedding of the word x i from the embedding table E S of the source language, and − → h i and ← − h i are the hidden states of the forward and backward RNNs which can be based on the LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) units.", "Each word in the source sentence is then represented by the concatenation of the corresponding bidirectional hidden states, h i = [ − → h i ; ← − h i ].", "Decoder The generation of each word y j is conditioned on all of the previously generated words y <j via the state of the RNN decoder s j , and the source sentence via a dynamic context vector c j : yj ∼ softmax(Wy · rj + br) rj = tanh(sj + Wrc · cj + Wrj · ET [yj−1]) sj = tanh(Ws · sj−1 + Wsj · ET [yj−1] + Wsc · cj) where E T [y j ] is embedding of the word y j from the embedding table E T of the target language, and W matrices and b r vector are the parameters.", "The dynamic context vector c j is computed via c j = i α ji h i , where α j = softmax(a j ) a ji = v · tanh(W ae · h i + W at · s j−1 ) This is known as the attention mechanism which dynamically attends to relevant parts of the source necessary for generating the next target word.", "Memory Networks (MemNets) Memory Networks are a class of neural models that use external memories to perform inference based on long-range dependencies.", "A memory is a collection of vectors M = {m 1 , .., m K } constituting the memory cells, where each cell m k may potentially correspond to a discrete object x k .", "The memory is equipped with a read and optionally a write operation.", "Given a query vector q, the output vector generated by reading from the memory is |M | i=1 p i m i , where p i represents the relevance of the query to the i-th memory cell p = Document NMT as Structured Prediction We formulate document-wide machine translation as a structured prediction problem.", "Given a set of sentences {x 1 , .", ".", ".", ", x |d| } in a source document d, we are interested in generating the collection of their translations {y 1 , .", ".", ".", ", y |d| } taking into account interdependencies among them imposed by the document.", "We achieve this by the factor graph in Figure 1 to model the probability of the target document given the source document.", "Our model has two types of factors: • f θ (y t ; x t , x −t ) to capture the interdependencies between the translation y t , the corresponding source sentence x t and all the other sentences in the source document x −t , and • g θ (y t ; y −t ) to capture the interdependencies between the translation y t and all the other translations in the document y −t .", "Hence, the probability of a document translation given the source document is P (y 1 , .", ".", ".", ", y |d| |x 1 , .", ".", ".", ", x |d| ) ∝ exp t f θ (y t ; x t , x −t ) + g θ (y t ; y −t ) .", "The factors f θ and g θ are realised by neural architectures whose parameters are collectively denoted by θ.", "Training It is challenging to train the model parameters by maximising the (regularised) likelihood since computing the partition function is hard.", "This is due to the enormity of factors g θ (y t ; y −t ) over a large number of translation variables y t 's (i.e., the number of sentences in the document) as well as their unbounded domain (i.e., all sentences in the target language).", "Thus, we resort to maximising the pseudo-likelihood (Besag, 1975) for training the parameters: arg max θ d∈D |d| t=1 P θ (y t |x t , y −t , x −t ) (1) where D is 
the set of bilingual training documents, and |d| denotes the number of (bilingual) sentences in the document d = {(x t , y t )} |d| t=1 .", "We directly model the document-conditioned NMT model P θ (y t |x t , y −t , x −t ) using a neural architecture which subsumes both the f θ and g θ factors (covered in the next section).", "Decoding To generate the best translation for a document according to our model, we need to solve the following optimisation problem: arg max y 1 ,...,y |d| |d| t=1 P θ (y t |x t , y −t , x −t ) which is hard (due to similar reasons as mentioned earlier).", "We hence resort to a block coordinate descent optimisation algorithm.", "More specifically, we initialise the translation of each sentence using the base neural MT model P (y t |x t ).", "We then repeatedly visit each sentence in the document, and update its translation using our document-context dependent NMT model P (y t |x t , y −t , x −t ) while the translations of other sentences are kept fixed.", "Context Dependent NMT with MemNets We augment the sentence-level attentional NMT model by incorporating the document context (both source and target) using memory networks when generating the translation of a sentence, as shown in Figure 2 .", "Our model generates the target translation word-by-word from left to right, similar to the vanilla attentional neural translation model.", "However, it conditions the generation of a target word not only on the previously generated words and the current source sentence (as in the vanilla NMT model), but also on all the other source sentences of the document and their translations.", "That is, the generation process is as follows: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, y−t, x−t) (2) where y t,j is the j-th word of the t-th target sentence, y t,<j are the previously generated words, and x −t and y −t are as introduced previously.", "Our model represents the source and target document contexts as external memories, and attends to relevant parts of these external memories when generating the translation of a sentence.", "Let M [x −t ] and M [y −t ] denote external memories representing the source and target document context, respectively.", "These contain memory cells corresponding to all sentences in the document except the t-th sentence (described shortly).", "Let h t and s t be representations of the t-th source sentence and its current translation, from the encoder and decoder respectively.", "We make use of h t as the query to get the relevant context from the source external memory: c src t = MemNet(M [x −t ], h t ) Furthermore, for the t-th sentence, we get the relevant information from the target context: c trg t = MemNet(M [y −t ], s t + W at · h t ) where the query consists of the representation of the translation s t from the decoder endowed with that of the source sentence h t from the encoder to make the query robust to potential noises in the current translation and circumvent error propagation, and W at projects the source representation into the hidden state space.", "Now that we have representations of the relevant source and target document contexts, Eq.", "2 can be re-written as: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, c trg t , c src t ) (3) More specifically, the memory contexts c src t and c trg t are incorporated into the NMT decoder as: • Memory-to-Context in which the memory contexts are incorporated when computing the next decoder hidden state: • Memory-to-Output in which the memory contexts are incorporated in the 
output layer: s t,j = tanh(W s · s t,j−1 + W sj · E T [y t,j ] + W sc · c t,j + W sm · c src t + W st · c trg t ) y t,j ∼ softmax(W y · r t,j + W ym · c src t + W yt · c trg t + b r ) where W sm , W st , W ym , and W yt are the new parameter matrices.", "We use only the source, only the target, or both external memories as the additional conditioning contexts.", "Furthermore, we use either the Memory-to-Context or Memory-to-Output architectures for incorporating the document contexts.", "In the experiments, we will explore these different options to investigate the most effective combination.", "We now turn our attention to the construction of the external memories for the source and target sides of a document.", "The Source Memory We make use of a hierarchical 2-level RNN architecture to construct the external memory of the source document.", "More specifically, we pass each sentence of the document through a sentence-level bidirectional RNN to get the representation of the sentence (by concatenating the last hidden states of the forward and backward RNNs).", "We then pass the sentence representations through a document-level bidirectional RNN to propagate sentences' information across the document.", "We take the hidden states of the document-level bidirectional RNNs as the memory cells of the source external memory.", "The source external memory is built once for each minibatch, and does not change throughout the document translation.", "To be able to fit the computational graph of the document NMT model within GPU memory limits, we pre-train the sentence-level bidirectional RNN using the language modelling training objective.", "However, the document-level bidirectional RNN is trained together with other parameters of the document NMT model by back-propagating the document translation training objective.", "The Target Memory The memory cells of the target external memory represent the current translations of the document.", "Recall from the previous section that we use coordinate descent iteratively to update these translations.", "Let {y 1 , .", ".", ".", ", y |d| } be the current translations, and let {s |y 1 | , .", ".", ".", ", s |y |d| | } be the last states of the decoder when these translations were generated.", "We use these last decoder states as the cells of the external target memory.", "We could make use of hierarchical sentencedocument RNNs to transform the document translations into memory cells (similar to what we do for the source memory); however, it would have been computationally expensive and may have resulted in error propagation.", "We will show in the experiments that our efficient target memory construction is indeed effective.", "Experiments and Analysis Datasets.", "We conducted experiments on three language pairs: French-English, German-English and Estonian-English.", "Table 1 shows the statistics of the datasets used in our experiments.", "The French-English dataset is based on the TED Talks corpus 1 (Cettolo et al., 2012) where each talk is considered a document.", "The Estonian-English data comes from the Europarl v7 corpus 2 (Koehn, 2005) .", "Following Smith et al.", "(2013) , we split the speeches based on the SPEAKER tag and treat them as documents.", "The French-English and Estonian-English corpora were randomly split into train/dev/test sets.", "For German-English, we use the News Commentary v9 corpus 3 for training, news-dev2009 for development, Table 1 : Training/dev/test corpora statistics: number of documents (×100) and sentences (×1000), average 
document length (in sentences) and source/target vocabulary size (×1000).", "For De-En, we report statistics of the two test sets news-test2011 and news-test2016.", "and news-test2011 and news-test2016 as the test sets.", "The news-commentary corpus has document boundaries already provided.", "We pre-processed all corpora to remove very short documents and those with missing translations.", "Out-of-vocabulary and rare words (frequency less than 5) are replaced by the <UNK> token, following Cohn et al.", "(2016).", "4 Evaluation Measures We use BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores to measure the quality of the generated translations.", "We use bootstrap resampling (Clark et al., 2011) to measure statistical significance, p < 0.05, comparing to the baselines.", "Implementation and Hyperparameters We implement our document-level neural machine translation model in C++ using the DyNet library (Neubig et al., 2017) , on top of the basic sentence-level NMT implementation in mantis (Cohn et al., 2016) .", "For the source memory, the sentence and document-level bidirectional RNNs use LSTM and GRU units, respectively.", "The translation model uses GRU units for the bidirectional RNN encoder and the 2-layer RNN decoder.", "GRUs are used instead of LSTMs to reduce the number of parameters in the main model.", "The RNN hidden dimensions and word embedding sizes are set to 512 in the translation and memory components, and the alignment dimension is set to 256 in the translation model.", "Training We use a stage-wise method to train the variants of our document context NMT model.", "Firstly, we pre-train the Memory-to-Context/Memory-to-Output models, setting their readings from the source and target memories to the zero vector.", "This effectively learns parameters associated with the underlying sentence-based NMT model, which is then used as initialisation when training all parameters in the second stage (including the ones from the first stage).", "For the first stage, we make use of stochastic gradient descent (SGD) 5 with initial learning rate of 0.1 and a decay factor of 0.5 after the fourth epoch for a total of ten epochs.", "The convergence occurs in 6-8 epochs.", "For the second stage, we use SGD with an initial learning rate of 0.08 and a decay factor of 0.9 after the first epoch for a total of 15 epochs 6 .", "The best model is picked based on the dev-set perplexity.", "To avoid overfitting, we employ dropout with the rate 0.2 for the single memory model.", "For the dual memory model, we set dropout for Document RNN to 0.2 and for the encoder and decoder to 0.5.", "Mini-batching is used in both stages to speed up training.", "For the largest dataset, the document NMT model takes about 4.5 hours per epoch to train on a single P100 GPU, while the sentence-level model takes about 3 hours per epoch for the same settings.", "When training the document NMT model in the second stage, we need the target memory.", "One option would be to use the ground truth translations for building the memory.", "However, this may result in inferior training, since at the test time, the decoder iteratively updates the translation of sentences based on the noisy translations of other sentences (accessed via the target memory).", "Hence, while training the document NMT model, we construct the target memory from the translations generated by the pre-trained sentence-level model 7 .", "This effectively exposes the model to its potential test-time mistakes during the training time, 
resulting in more robust learned parameters.", "Main Results We have three variants of our model, using: (i) only the source memory (S-NMT+src mem), (ii) only the target memory (S-NMT+trg mem), or 5 In our initial experiments, we found SGD to be more effective than Adam/Adagrad; an observation also made by Bahar et al.", "(2017) .", "6 For the document NMT model training, we did some preliminary experiments using different learning rates and used the scheme which converged to the best perplexity in the least number of epochs while for sentence-level training we follow Cohn et al.", "(2016) .", "7 We report results for two-pass decoding, i.e., we only update the translations once using the initial translations generated from the base model.", "We tried multiple passes of decoding at test-time but it was not helpful.", "(iii) both the source and target memories (S-NMT+both mems).", "We compare these variants against the standard sentence-level NMT model (S-NMT).", "We also compare the source memory variants of our model to the local context-NMT models 8 of Jean et al.", "(2017) and Wang et al.", "(2017) , which use a few previous source sentences as context, added to the decoder hidden state (similar to our Memory-to-Context model).", "Memory-to-Context We consistently observe +1.15/+1.13 BLEU/METEOR score improvements across the three language pairs upon comparing our best model to S-NMT (see Table 2 ).", "Overall, our document NMT model with both memories has been the most effective variant for all of the three language pairs.", "We further experiment to train the target memory variants using gold translations instead of the generated ones for German-English.", "This led to −0.16 and −0.25 decrease 9 in the BLEU scores for the target-only and both-memory variants, which confirms the intuition of constructing the target memory by exposing the model to its noises during training time.", "guage pairs.", "For French→English, all variants of document NMT model show comparable performance when using BLEU; however, when evaluated using METEOR, the dual memory model is the best.", "For German→English, the target memory variants give comparable results, whereas for Estonian→English, the dual memory variant proves to be the best.", "Overall, the Memory-to-Context model variants perform better than their Memory-to-Output counterparts.", "We attribute this to the large number of parameters in the latter architecture (Table 3 ) and limited amount of data.", "We further experiment with more data for train-BLEU METEOR Fr→En De→En Et→EnFr→En De→En Et→En NC-11 NC-16 NC-11 NC-16 Jean et al.", "(2017) 21.95 6.04 10.26 21.67 24.10 11.61 15.56 25.77 Wang et al.", "(2017) ing the sentence-based NMT to investigate the extent to which document context is useful in this setting.", "We randomly choose an additional 300K German-English sentence pairs from WMT'14 data to train the base NMT model in stage 1.", "In stage 2, we use the same document corpus as before to train the document-level models.", "As seen from Figure 3 , the document MT variants still benefit from the document context even when the base model is trained on a larger bilingual corpus.", "For the Memory-to-Context model, we see massive improvements of +0.72 and +1.44 METEOR scores for the source memory and dual memory model respectively, when compared to the baseline.", "On the other hand, for the Memory-to-Output model, the target memory model's METEOR score increases significantly by +1.09 compared to the baseline, slightly differing from the 
corresponding model using the smaller corpus (+1.2).", "Table 4 shows comparison of our Memory-to-Context model variants to local source context-NMT models (Jean et al., 2017; Wang et al., 2017) .", "For French→English, our source memory model is comparable to both baselines.", "For German→English, our S-NMT+src mem model is comparable to Jean et al.", "(2017) but outperforms Wang et al.", "(2017) for one test set according to BLEU, and for both test sets according to METEOR.", "For Estonian→English, our model outperforms Jean et al.", "(2017) .", "Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context since we do an offline training to get the sentence representations (as previously mentioned).", "However, the other two context baselines have access to that information, yet our model's performance is either better or quite close to those models.", "We also look into the unigram BLEU scores to see how much our global source memory variants lead to improvement at the word-level.", "From Table 5 , it can be seen that our model's performance is better than the baselines for majority of the cases.", "The S-NMT+both mems model gives the best results for all three language pairs, showing that leveraging both source and target document context is indeed beneficial for improving MT performance.", "Memory-to-Output From Local Source Context Models Analysis Using Global/Local Target Context We first investigate whether using a local target context would have been equally sufficient in comparison to our global target memory model for the three datasets.", "We condition the decoder on the previous target sentence representation (obtained from the last hidden state of the decoder) by adding it as an additional input to all decoder states (PrevTrg) similar to our Memory-to-Context model.", "From Table 6 , we observe that for French→English and Estonian→English, using all sentences in the target context or just the previous target sentence gives comparable results.", "We may attribute this to these specific datasets, that is documents from TED talks or European Parliament Proceedings may depend more on the local than on the global context.", "However, for German→English , the target memory model performs the best show- ing that for documents with richer context (e.g.", "news articles) we do need the global target document context to improve MT performance.", "Output Analysis To better understand the dual memory model, we look at the first sentence example in Table 7 .", "It can be seen that the source sentence has the noun \"Qimonda\" but the sentencelevel NMT model fails to attend to it when generating the translation.", "On the other hand, the single memory models are better in delivering some, if not all, of the underlying information in the source sentence but the dual memory model's translation quality surpasses them.", "This is because the word \"Qimonda\" was being repeated in this specific document, providing a strong contextual signal to our global document context model while the local context model by Wang et al.", "(2017) is still unable to correctly translate the noun even when it has access to the word-level information of previous sentences.", "We resort to manual evaluation as there is no standard metric which evaluates document-level discourse information like consistency or pronominal anaphora.", "By manual inspection, we observe that our models can identify nouns in the source sentence to resolve coreferent 
pronouns, as shown in the second example of Table 7 .", "Here the topic of the sentence is \"the country under the dictatorship of Lukashenko\" and our target and dual memory models are able to generate the appropriate pronoun/determiner as well as accurately translate the word 'diktatuur', hence producing much better translation as compared to both baselines.", "Apart from these improvements, our models are better in improving the readability of sentences by generating more context appropriate grammatical structures such as verbs and adverbs.", "Furthermore, to validate that our model improves the consistency of translations, we look at five documents (roughly 70 sentences) from the test set of Estonian-English, each of which had a word being repeated in the gold translation.", "Our model is able to resolve the consistency in 22 out of 32 cases as compared to the sentencebased model which only accurately translates 16 of those.", "Following Wang et al.", "(2017) , we also investigate the extent to which our model can correct errors made by the baseline system.", "We randomly choose five documents from the test set.", "Out of the 20 words/phrases which were incorrectly translated by the sentence-based model, our Related Work Document-level Statistical MT There have been a few SMT-based attempts to document MT, but they are either restrictive or do not lead to significant improvements.", "Hardmeier and Federico (2010) identify links among words in the source document using a word-dependency model to improve translation of anaphoric pronouns.", "Gong et al.", "(2011) make use of a cache-based system to save relevant information from the previously generated translations and use that to enhance document-level translation.", "Garcia et al.", "(2014) propose a two-pass approach to improve the translations already obtained by a sentencelevel model.", "Docent is an SMT-based document-level decoder (Hardmeier et al., 2012 (Hardmeier et al., , 2013 , which tries to modify the initial translation generated by the Moses decoder (Koehn et al., 2007) through stochastic local search and hill-climbing.", "Garcia et al.", "(2015) make use of neural-based continuous word representations to incorporate distributional semantics into Docent.", "In another work, Garcia et al.", "(2017) incorporate new word embedding features into Docent to improve the lexical consistency of translations.", "The proposed methods fail to yield improvements upon automatic evaluation.", "Larger Context Neural MT Jean et al.", "(2017) extend the vanilla attention-based neural MT model (Bahdanau et al., 2015) by conditioning the decoder on the previous sentence via attention over its words.", "Extending their model to consider the global source document context would be challenging due to the large size of computation graph over all the words in the source document.", "Wang et al.", "(2017) employ a 2-level hierarichal RNN to summarise three previous source sentences, which is then used as an additional input to the decoder hidden state.", "Bawden et al.", "(2017) use multi-encoder NMT models to exploit context from the previous source and target sentence.", "They highlight the importance of targetside context but report deteriorated BLEU scores when using it.", "All these works consider a very local source/target context and completely ignore the global source and target document contexts.", "Conclusion We have proposed a document-level neural MT model that captures global source and target document context.", "Our model augments 
the vanilla sentence-based NMT model with external memories to incorporate documental interdependencies on both source and target sides.", "We show statistically significant improvements of the translation quality on three language pairs.", "For future work, we intend to investigate models which incorporate specific discourse-level phenomena." ] }
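The source memory described in the record above is built by a two-level RNN: a pre-trained sentence-level bidirectional LSTM yields one vector per sentence, and a document-level bidirectional GRU over those vectors produces the memory cells. The PyTorch-style sketch below is illustrative only; the actual system is implemented in C++ with DyNet, and the hidden sizes and class layout here are assumptions.

```python
import torch
import torch.nn as nn

class SourceMemory(nn.Module):
    """Hierarchical 2-level encoder for the source document memory (sketch).

    The paper's system uses DyNet (C++): an LSTM sentence-level biRNN,
    pre-trained with a language-modelling objective, and a GRU
    document-level biRNN trained with the translation objective.
    Hidden sizes and class layout here are illustrative assumptions.
    """
    def __init__(self, emb_dim=512, hid_dim=512):
        super().__init__()
        # Sentence-level biRNN: final states give one vector per sentence.
        self.sent_rnn = nn.LSTM(emb_dim, hid_dim,
                                bidirectional=True, batch_first=True)
        # Document-level biRNN: propagates information across sentences.
        self.doc_rnn = nn.GRU(2 * hid_dim, hid_dim,
                              bidirectional=True, batch_first=True)

    def forward(self, sent_embs):  # list of (sent_len, emb_dim) tensors
        reps = []
        for emb in sent_embs:
            _, (h, _) = self.sent_rnn(emb.unsqueeze(0))
            # Concatenate last forward/backward hidden states.
            reps.append(torch.cat([h[0], h[1]], dim=-1))
        doc_in = torch.stack(reps, dim=1)  # (1, |d|, 2*hid_dim)
        cells, _ = self.doc_rnn(doc_in)    # (1, |d|, 2*hid_dim)
        # One memory cell per sentence; the t-th cell is excluded when
        # translating sentence t.
        return cells.squeeze(0)
```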
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Neural Machine Translation (NMT)", "Memory Networks (MemNets)", "Document NMT as Structured Prediction", "Context Dependent NMT with MemNets", "Experiments and Analysis", "Main Results", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-129#paper-1351#slide-1
Document MT as Structured Prediction
[Factor graph figure over the source sentences x1, x2, x3 and their target translations] Two types of factors: f(yt; xt, x−t), g(yt; y−t) Pseudo-likelihood training: arg max P(yt | xt, y−t, x−t), where f and g are subsumed in P(yt | xt, y−t, x−t) Challenge: During test time, the target document is not given Coordinate Ascent (i.e., Iterative Decoding)
[Slide figure: an example source document with sentences x1, x2, x3 and their translations y1, y2, y3 connected in a factor graph] Two types of factors: f(yt; xt, x−t), g(yt; y−t). Training (pseudo-likelihood): arg max Πt P(yt | xt, y−t, x−t), where f and g are subsumed in P(yt | xt, y−t, x−t). Challenge: during test time, the target document is not given → Coordinate Ascent (i.e., Iterative Decoding)
[]
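The coordinate ascent sketched on this slide can be written as a small loop. The sketch below is illustrative only; `translate_sentence` (a document-context-conditioned sentence translator) and the number of passes are assumed interfaces, not the paper's API.

```python
def decode_document(src_doc, translate_sentence, passes=2):
    """Block coordinate ascent over sentence translations.
    translate_sentence(x_t, src_doc, trg_doc, t) is assumed to return the
    best translation of sentence t given both document contexts."""
    # initialise with context-free (sentence-level) translations
    trg_doc = [translate_sentence(x, src_doc, None, t)
               for t, x in enumerate(src_doc)]
    for _ in range(passes - 1):
        for t, x in enumerate(src_doc):
            # update one translation while all the others are kept fixed
            trg_doc[t] = translate_sentence(x, src_doc, trg_doc, t)
    return trg_doc
```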
GEM-SciDuet-train-129#paper-1351#slide-2
1351
Document Context Neural Machine Translation with Memory Networks
We present a document-level neural machine translation model which takes both source and target document context into account using memory networks. We model the problem as a structured prediction problem with interdependencies among the observed and hidden variables, i.e., the source sentences and their unobserved target translations in the document. The resulting structured prediction problem is tackled with a neural translation model equipped with two memory components, one each for the source and target side, to capture the documental interdependencies. We train the model end-to-end, and propose an iterative decoding algorithm based on block coordinate descent. Experimental results of English translations from French, German, and Estonian documents show that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Neural machine translation (NMT) has proven to be powerful (Sutskever et al., 2014; Bahdanau et al., 2015) .", "It is on-par, and in some cases, even surpasses the traditional statistical MT (Luong et al., 2015) while enjoying more flexibility and significantly less manual effort for feature engineering.", "Despite their flexibility, most neural MT models translate sentences independently.", "Discourse phenomenon such as pronominal anaphora and lexical consistency, may depend on long-range dependency going farther than a few previous sentences, are neglected in sentencebased translation (Bawden et al., 2017) .", "There are only a handful of attempts to document-wide machine translation in statistical and neural MT camps.", "Hardmeier and Federico (2010) ; Gong et al.", "(2011) ; Garcia et al.", "(2014) propose document translation models based on statistical MT but are restrictive in the way they incorporate the document-level information and fail to gain significant improvements.", "More recently, there have been a few attempts to incorporate source side context into neural MT (Jean et al., 2017; Wang et al., 2017; Bawden et al., 2017) ; however, these works only consider a very local context including a few previous source/target sentences, ignoring the global source and target documental contexts.", "The latter two report deteriorated performance when using the target-side context.", "In this paper, we present a document-level machine translation model which combines sentencebased NMT (Bahdanau et al., 2015) with memory networks (Sukhbaatar et al., 2015) .", "We capture the global source and target document context with two memory components, one each for the source and target side, and incorporate it into the sentence-based NMT by changing the decoder to condition on it as the sentence translation is generated.", "We conduct experiments on three language pairs: French-English, German-English and Estonian-English.", "The experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.", "Background Neural Machine Translation (NMT) Our document NMT model is grounded on sentence-based NMT model (Bahdanau et al., 2015) which contains an encoder to read the source sentence as well as an attentional decoder to generate the target translation.", "Encoder It is a bidirectional RNN consisting of two RNNs running in opposite directions over the source sentence: − → hi = −−→ RNN( − → 
$\overrightarrow{h}_i = \overrightarrow{\text{RNN}}(\overrightarrow{h}_{i-1}, E_S[x_i])$ and $\overleftarrow{h}_i = \overleftarrow{\text{RNN}}(\overleftarrow{h}_{i+1}, E_S[x_i])$, where $E_S[x_i]$ is the embedding of the word $x_i$ from the embedding table $E_S$ of the source language, and $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ are the hidden states of the forward and backward RNNs, which can be based on LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) units.", "Each word in the source sentence is then represented by the concatenation of the corresponding bidirectional hidden states, $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$.", "Decoder The generation of each word $y_j$ is conditioned on all of the previously generated words $y_{<j}$ via the state of the RNN decoder $s_j$, and on the source sentence via a dynamic context vector $c_j$: $y_j \sim \text{softmax}(W_y \cdot r_j + b_r)$, $r_j = \tanh(s_j + W_{rc} \cdot c_j + W_{rj} \cdot E_T[y_{j-1}])$, $s_j = \tanh(W_s \cdot s_{j-1} + W_{sj} \cdot E_T[y_{j-1}] + W_{sc} \cdot c_j)$, where $E_T[y_j]$ is the embedding of the word $y_j$ from the embedding table $E_T$ of the target language, and the $W$ matrices and the vector $b_r$ are the parameters.", "The dynamic context vector $c_j$ is computed via $c_j = \sum_i \alpha_{ji} h_i$, where $\alpha_j = \text{softmax}(a_j)$ and $a_{ji} = v \cdot \tanh(W_{ae} \cdot h_i + W_{at} \cdot s_{j-1})$.", "This is known as the attention mechanism, which dynamically attends to the parts of the source relevant for generating the next target word.", "Memory Networks (MemNets) Memory Networks are a class of neural models that use external memories to perform inference based on long-range dependencies.", "A memory is a collection of vectors $M = \{m_1, \ldots, m_K\}$ constituting the memory cells, where each cell $m_k$ may potentially correspond to a discrete object $x_k$.", "The memory is equipped with a read and optionally a write operation.", "Given a query vector $q$, the output vector generated by reading from the memory is $\sum_{i=1}^{|M|} p_i m_i$, where $p_i$ represents the relevance of the query to the $i$-th memory cell, $p = \text{softmax}(q \cdot m_1, \ldots, q \cdot m_{|M|})$.", "Document NMT as Structured Prediction We formulate document-wide machine translation as a structured prediction problem.", "Given a set of sentences $\{x_1, \ldots, x_{|d|}\}$ in a source document $d$, we are interested in generating the collection of their translations $\{y_1, \ldots, y_{|d|}\}$ taking into account the interdependencies among them imposed by the document.", "We use the factor graph in Figure 1 to model the probability of the target document given the source document.", "Our model has two types of factors: • $f_\theta(y_t; x_t, x_{-t})$ to capture the interdependencies between the translation $y_t$, the corresponding source sentence $x_t$, and all the other sentences in the source document $x_{-t}$; and • $g_\theta(y_t; y_{-t})$ to capture the interdependencies between the translation $y_t$ and all the other translations in the document $y_{-t}$.", "Hence, the probability of a document translation given the source document is $P(y_1, \ldots, y_{|d|} \mid x_1, \ldots, x_{|d|}) \propto \exp\big(\sum_t f_\theta(y_t; x_t, x_{-t}) + g_\theta(y_t; y_{-t})\big)$.", "The factors $f_\theta$ and $g_\theta$ are realised by neural architectures whose parameters are collectively denoted by $\theta$.", "Training It is challenging to train the model parameters by maximising the (regularised) likelihood, since computing the partition function is hard.", "This is due to the enormity of the factors $g_\theta(y_t; y_{-t})$ over a large number of translation variables $y_t$ (i.e., the number of sentences in the document) as well as their unbounded domain (i.e., all sentences in the target language).", "Thus, we resort to maximising the pseudo-likelihood (Besag, 1975) for training the parameters: $\arg\max_\theta \prod_{d \in D} \prod_{t=1}^{|d|} P_\theta(y_t \mid x_t, y_{-t}, x_{-t})$ (1), where $D$ is the set of bilingual training documents, and $|d|$ denotes the number of (bilingual) sentences in the document $d = \{(x_t, y_t)\}_{t=1}^{|d|}$.",
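To make the memory read from the Background concrete, here is a minimal numpy sketch; the softmax-over-inner-products relevance is the standard end-to-end memory network formulation (Sukhbaatar et al., 2015), and the shapes are assumptions for illustration.

```python
import numpy as np

def memnet_read(M, q):
    """Soft read from a memory M (K x d matrix of cells) with query q (d,).
    Returns the relevance distribution p and the read vector sum_i p_i m_i."""
    scores = M @ q                 # inner product q . m_i for each cell
    scores -= scores.max()         # shift for numerical stability
    p = np.exp(scores) / np.exp(scores).sum()
    return p, p @ M                # shapes (K,) and (d,)
```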
"We directly model the document-conditioned NMT model $P_\theta(y_t \mid x_t, y_{-t}, x_{-t})$ using a neural architecture which subsumes both the $f_\theta$ and $g_\theta$ factors (covered in the next section).", "Decoding To generate the best translation for a document according to our model, we need to solve the following optimisation problem: $\arg\max_{y_1, \ldots, y_{|d|}} \prod_{t=1}^{|d|} P_\theta(y_t \mid x_t, y_{-t}, x_{-t})$, which is hard (for reasons similar to those mentioned earlier).", "We hence resort to a block coordinate descent optimisation algorithm.", "More specifically, we initialise the translation of each sentence using the base neural MT model $P(y_t \mid x_t)$.", "We then repeatedly visit each sentence in the document, and update its translation using our document-context dependent NMT model $P_\theta(y_t \mid x_t, y_{-t}, x_{-t})$ while the translations of the other sentences are kept fixed.", "Context Dependent NMT with MemNets We augment the sentence-level attentional NMT model by incorporating the document context (both source and target) using memory networks when generating the translation of a sentence, as shown in Figure 2.", "Our model generates the target translation word-by-word from left to right, similar to the vanilla attentional neural translation model.", "However, it conditions the generation of a target word not only on the previously generated words and the current source sentence (as in the vanilla NMT model), but also on all the other source sentences of the document and their translations.", "That is, the generation process is as follows: $P_\theta(y_t \mid x_t, y_{-t}, x_{-t}) = \prod_{j=1}^{|y_t|} P_\theta(y_{t,j} \mid y_{t,<j}, x_t, y_{-t}, x_{-t})$ (2), where $y_{t,j}$ is the $j$-th word of the $t$-th target sentence, $y_{t,<j}$ are the previously generated words, and $x_{-t}$ and $y_{-t}$ are as introduced previously.", "Our model represents the source and target document contexts as external memories, and attends to relevant parts of these external memories when generating the translation of a sentence.", "Let $M[x_{-t}]$ and $M[y_{-t}]$ denote the external memories representing the source and target document context, respectively.", "These contain memory cells corresponding to all sentences in the document except the $t$-th sentence (described shortly).", "Let $h_t$ and $s_t$ be the representations of the $t$-th source sentence and its current translation, from the encoder and decoder respectively.", "We make use of $h_t$ as the query to retrieve the relevant context from the source external memory: $c^{\text{src}}_t = \text{MemNet}(M[x_{-t}], h_t)$.", "Furthermore, for the $t$-th sentence, we retrieve the relevant information from the target context: $c^{\text{trg}}_t = \text{MemNet}(M[y_{-t}], s_t + W_{at} \cdot h_t)$, where the query consists of the representation of the translation $s_t$ from the decoder endowed with that of the source sentence $h_t$ from the encoder, to make the query robust to potential noise in the current translation and circumvent error propagation; $W_{at}$ projects the source representation into the hidden state space.", "Now that we have representations of the relevant source and target document contexts, Eq. 2 can be re-written as: $P_\theta(y_t \mid x_t, y_{-t}, x_{-t}) = \prod_{j=1}^{|y_t|} P_\theta(y_{t,j} \mid y_{t,<j}, x_t, c^{\text{trg}}_t, c^{\text{src}}_t)$ (3).",
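Concretely, the two reads above might look as follows. This sketch reuses the memnet_read helper from the earlier block; the shapes and the projection matrix W_at are illustrative assumptions, not the paper's implementation.

```python
def document_contexts(M_src, M_trg, h_t, s_t, W_at):
    """Read the source and target document memories for sentence t.
    M_src, M_trg: memory matrices built from all other sentences;
    h_t: encoder representation of sentence t; s_t: decoder state of its
    current translation; W_at: projection of h_t into the state space.
    Requires memnet_read (defined above) and numpy arrays as inputs."""
    _, c_src = memnet_read(M_src, h_t)                # c^src_t
    _, c_trg = memnet_read(M_trg, s_t + W_at @ h_t)   # c^trg_t
    return c_src, c_trg
```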
"More specifically, the memory contexts $c^{\text{src}}_t$ and $c^{\text{trg}}_t$ are incorporated into the NMT decoder as: • Memory-to-Context, in which the memory contexts are incorporated when computing the next decoder hidden state: $s_{t,j} = \tanh(W_s \cdot s_{t,j-1} + W_{sj} \cdot E_T[y_{t,j-1}] + W_{sc} \cdot c_{t,j} + W_{sm} \cdot c^{\text{src}}_t + W_{st} \cdot c^{\text{trg}}_t)$; and • Memory-to-Output, in which the memory contexts are incorporated in the output layer: $y_{t,j} \sim \text{softmax}(W_y \cdot r_{t,j} + W_{ym} \cdot c^{\text{src}}_t + W_{yt} \cdot c^{\text{trg}}_t + b_r)$, where $W_{sm}$, $W_{st}$, $W_{ym}$, and $W_{yt}$ are the new parameter matrices.", "We use only the source, only the target, or both external memories as the additional conditioning contexts.", "Furthermore, we use either the Memory-to-Context or the Memory-to-Output architecture for incorporating the document contexts.", "In the experiments, we explore these different options to investigate the most effective combination.", "We now turn our attention to the construction of the external memories for the source and target sides of a document.", "The Source Memory We make use of a hierarchical 2-level RNN architecture to construct the external memory of the source document.", "More specifically, we pass each sentence of the document through a sentence-level bidirectional RNN to get the representation of the sentence (by concatenating the last hidden states of the forward and backward RNNs).", "We then pass the sentence representations through a document-level bidirectional RNN to propagate sentence information across the document.", "We take the hidden states of the document-level bidirectional RNN as the memory cells of the source external memory.", "The source external memory is built once for each minibatch, and does not change throughout the document translation.", "To fit the computational graph of the document NMT model within GPU memory limits, we pre-train the sentence-level bidirectional RNN using the language modelling training objective.", "However, the document-level bidirectional RNN is trained together with the other parameters of the document NMT model by back-propagating the document translation training objective.", "The Target Memory The memory cells of the target external memory represent the current translations of the document.", "Recall from the previous section that we use coordinate descent to iteratively update these translations.", "Let $\{y_1, \ldots, y_{|d|}\}$ be the current translations, and let $\{s_{|y_1|}, \ldots, s_{|y_{|d|}|}\}$ be the last states of the decoder when these translations were generated.", "We use these last decoder states as the cells of the external target memory.", "We could make use of hierarchical sentence-document RNNs to transform the document translations into memory cells (similar to what we do for the source memory); however, this would have been computationally expensive and may have resulted in error propagation.", "We will show in the experiments that our efficient target memory construction is indeed effective.",
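To make the two-level source memory construction concrete, here is a rough numpy sketch. The recurrent cell functions are simple stand-ins passed in by the caller, not the LSTM/GRU units the paper uses, and all dimensions are illustrative assumptions.

```python
import numpy as np

def run_birnn(inputs, step_fwd, step_bwd, d):
    """Generic bidirectional pass; step_fwd/step_bwd are assumed recurrent
    cells mapping (state, input) -> state. Returns position-aligned states."""
    fwd, h = [], np.zeros(d)
    for x in inputs:
        h = step_fwd(h, x); fwd.append(h)
    bwd, h = [], np.zeros(d)
    for x in reversed(inputs):
        h = step_bwd(h, x); bwd.append(h)
    return fwd, bwd[::-1]

def build_source_memory(doc_embeddings, sent_fwd, sent_bwd,
                        doc_fwd, doc_bwd, d):
    """doc_embeddings: list of sentences, each a list of word vectors.
    1) sentence-level biRNN -> one vector per sentence (last states);
    2) document-level biRNN over those -> memory cells of the source memory."""
    sent_reps = []
    for sent in doc_embeddings:
        fwd, bwd = run_birnn(sent, sent_fwd, sent_bwd, d)
        # fwd[-1]: final forward state; bwd[0]: final backward state
        sent_reps.append(np.concatenate([fwd[-1], bwd[0]]))
    fwd, bwd = run_birnn(sent_reps, doc_fwd, doc_bwd, 2 * d)
    return np.stack([np.concatenate([f, b]) for f, b in zip(fwd, bwd)])
```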
"Experiments and Analysis Datasets.", "We conducted experiments on three language pairs: French-English, German-English and Estonian-English.", "Table 1 shows the statistics of the datasets used in our experiments.", "The French-English dataset is based on the TED Talks corpus 1 (Cettolo et al., 2012), where each talk is considered a document.", "The Estonian-English data comes from the Europarl v7 corpus 2 (Koehn, 2005).", "Following Smith et al.", "(2013), we split the speeches based on the SPEAKER tag and treat them as documents.", "The French-English and Estonian-English corpora were randomly split into train/dev/test sets.", "For German-English, we use the News Commentary v9 corpus 3 for training, news-dev2009 for development, and news-test2011 and news-test2016 as the test sets.", "Table 1: Training/dev/test corpora statistics: number of documents (×100) and sentences (×1000), average document length (in sentences), and source/target vocabulary size (×1000); for De-En, we report statistics of the two test sets news-test2011 and news-test2016.", "The news-commentary corpus has document boundaries already provided.", "We pre-processed all corpora to remove very short documents and those with missing translations.", "Out-of-vocabulary and rare words (frequency less than 5) are replaced by the <UNK> token, following Cohn et al.", "(2016). 4", "Evaluation Measures We use BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores to measure the quality of the generated translations.", "We use bootstrap resampling (Clark et al., 2011) to measure statistical significance, p < 0.05, compared to the baselines.", "Implementation and Hyperparameters We implement our document-level neural machine translation model in C++ using the DyNet library (Neubig et al., 2017), on top of the basic sentence-level NMT implementation in mantis (Cohn et al., 2016).", "For the source memory, the sentence- and document-level bidirectional RNNs use LSTM and GRU units, respectively.", "The translation model uses GRU units for the bidirectional RNN encoder and the 2-layer RNN decoder.", "GRUs are used instead of LSTMs to reduce the number of parameters in the main model.", "The RNN hidden dimensions and word embedding sizes are set to 512 in the translation and memory components, and the alignment dimension is set to 256 in the translation model.", "Training We use a stage-wise method to train the variants of our document context NMT model.", "Firstly, we pre-train the Memory-to-Context/Memory-to-Output models, setting their readings from the source and target memories to the zero vector.", "This effectively learns the parameters associated with the underlying sentence-based NMT model, which are then used as initialisation when training all parameters in the second stage (including the ones from the first stage).", "For the first stage, we make use of stochastic gradient descent (SGD) 5 with an initial learning rate of 0.1 and a decay factor of 0.5 after the fourth epoch, for a total of ten epochs.", "Convergence occurs in 6-8 epochs.", "For the second stage, we use SGD with an initial learning rate of 0.08 and a decay factor of 0.9 after the first epoch, for a total of 15 epochs 6 .", "The best model is picked based on the dev-set perplexity.", "To avoid overfitting, we employ dropout with rate 0.2 for the single memory model.", "For the dual memory model, we set the dropout for the document RNN to 0.2 and for the encoder and decoder to 0.5.", "Mini-batching is used in both stages to speed up training.", "For the largest dataset, the document NMT model takes about 4.5 hours per epoch to train on a single P100 GPU, while the sentence-level model takes about 3 hours per epoch with the same settings.", "When training the document NMT model in the second stage, we need the target memory.", "One option would be to use the ground-truth translations for building the memory.", "However, this may result in inferior training, since at test time the decoder iteratively updates the translations of sentences based on the noisy translations of other sentences (accessed via the target memory).", "Hence, while training the document NMT model, we construct the target memory from the translations generated by the pre-trained sentence-level model 7 .", "This effectively exposes the model to its potential test-time mistakes during training, resulting in more robust learned parameters.",
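As a companion to the bootstrap-resampling significance test mentioned under Evaluation Measures, the following is a generic paired bootstrap sketch; the metric callback and the number of resamples are assumptions for illustration, not details taken from the paper.

```python
import random

def paired_bootstrap(sys_a, sys_b, refs, metric, n_resamples=1000, seed=0):
    """Estimate how often system A beats system B on resampled test sets.
    metric(hyps, refs) is assumed to return a corpus-level score (e.g. BLEU)."""
    rng = random.Random(seed)
    idx = list(range(len(refs)))
    wins = 0
    for _ in range(n_resamples):
        sample = [rng.choice(idx) for _ in idx]   # resample with replacement
        a = metric([sys_a[i] for i in sample], [refs[i] for i in sample])
        b = metric([sys_b[i] for i in sample], [refs[i] for i in sample])
        wins += a > b
    return wins / n_resamples   # fraction of resamples where A outscores B
```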
"Main Results We have three variants of our model, using: (i) only the source memory (S-NMT+src mem), (ii) only the target memory (S-NMT+trg mem), or (iii) both the source and target memories (S-NMT+both mems).", "5 In our initial experiments, we found SGD to be more effective than Adam/Adagrad, an observation also made by Bahar et al.", "(2017).", "6 For the document NMT model training, we ran preliminary experiments with different learning rates and used the scheme which converged to the best perplexity in the fewest epochs; for sentence-level training we follow Cohn et al.", "(2016).", "7 We report results for two-pass decoding, i.e., we only update the translations once, using the initial translations generated from the base model.", "We tried multiple passes of decoding at test time, but it was not helpful.", "We compare these variants against the standard sentence-level NMT model (S-NMT).", "We also compare the source memory variants of our model to the local context-NMT models 8 of Jean et al.", "(2017) and Wang et al.", "(2017), which use a few previous source sentences as context, added to the decoder hidden state (similar to our Memory-to-Context model).", "Memory-to-Context We consistently observe +1.15/+1.13 BLEU/METEOR score improvements across the three language pairs when comparing our best model to S-NMT (see Table 2).", "Overall, our document NMT model with both memories has been the most effective variant for all three language pairs.", "We further experiment with training the target memory variants using gold translations instead of the generated ones for German-English.", "This led to decreases of −0.16 and −0.25 9 in the BLEU scores for the target-only and both-memory variants, which confirms the intuition of constructing the target memory by exposing the model to its own noise during training.", "Memory-to-Output […] language pairs.", "For French→English, all variants of the document NMT model show comparable performance when using BLEU; however, when evaluated using METEOR, the dual memory model is the best.", "For German→English, the target memory variants give comparable results, whereas for Estonian→English, the dual memory variant proves to be the best.", "Overall, the Memory-to-Context model variants perform better than their Memory-to-Output counterparts.", "We attribute this to the large number of parameters in the latter architecture (Table 3) and the limited amount of data.", "We further experiment with more data for training the sentence-based NMT to investigate the extent to which document context is useful in this setting.", "Table 4 (excerpt): BLEU and METEOR for Fr→En, De→En (NC-11, NC-16), Et→En. Jean et al. (2017): BLEU 21.95, 6.04, 10.26, 21.67; METEOR 24.10, 11.61, 15.56, 25.77. Wang et al. (2017): […].", "We randomly choose an additional 300K German-English sentence pairs from WMT'14 data to train the base NMT model in stage 1.", "In stage 2, we use the same document corpus as before to train the document-level models.", "As seen in Figure 3, the document MT variants still benefit from the document context even when the base model is trained on a larger bilingual corpus.", "For the Memory-to-Context model, we see massive improvements of +0.72 and +1.44 METEOR for the source memory and dual memory models respectively, when compared to the baseline.", "On the other hand, for the Memory-to-Output model, the target memory model's METEOR score increases significantly, by +1.09 compared to the baseline, slightly differing from the
corresponding model using the smaller corpus (+1.2).", "Local Source Context Models Table 4 shows a comparison of our Memory-to-Context model variants to the local source context-NMT models (Jean et al., 2017; Wang et al., 2017).", "For French→English, our source memory model is comparable to both baselines.", "For German→English, our S-NMT+src mem model is comparable to Jean et al.", "(2017) but outperforms Wang et al.", "(2017) for one test set according to BLEU, and for both test sets according to METEOR.", "For Estonian→English, our model outperforms Jean et al.", "(2017).", "Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context, since we train the sentence representations offline (as previously mentioned).", "However, the other two context baselines have access to that information, yet our model's performance is either better than or quite close to those models.", "We also look into the unigram BLEU scores to see how much our global source memory variants improve translation at the word level.", "From Table 5, it can be seen that our model's performance is better than the baselines in the majority of cases.", "The S-NMT+both mems model gives the best results for all three language pairs, showing that leveraging both source and target document context is indeed beneficial for improving MT performance.", "Analysis Using Global/Local Target Context We first investigate whether using a local target context would have been equally sufficient in comparison to our global target memory model for the three datasets.", "We condition the decoder on the previous target sentence representation (obtained from the last hidden state of the decoder) by adding it as an additional input to all decoder states (PrevTrg), similar to our Memory-to-Context model.", "From Table 6, we observe that for French→English and Estonian→English, using all sentences in the target context or just the previous target sentence gives comparable results.", "We may attribute this to these specific datasets; that is, documents from TED talks or European Parliament proceedings may depend more on the local than on the global context.", "However, for German→English, the target memory model performs the best, showing that for documents with richer context (e.g. news articles) we do need the global target document context to improve MT performance.", "Output Analysis To better understand the dual memory model, we look at the first sentence example in Table 7.", "It can be seen that the source sentence contains the noun \"Qimonda\", but the sentence-level NMT model fails to attend to it when generating the translation.", "On the other hand, the single memory models are better at delivering some, if not all, of the underlying information in the source sentence, but the dual memory model's translation quality surpasses them.", "This is because the word \"Qimonda\" is repeated in this specific document, providing a strong contextual signal to our global document context model, while the local context model of Wang et al.", "(2017) is still unable to correctly translate the noun even though it has access to the word-level information of previous sentences.", "We resort to manual evaluation, as there is no standard metric which evaluates document-level discourse information like consistency or pronominal anaphora.", "By manual inspection, we observe that our models can identify nouns in the source sentence to resolve coreferent
pronouns, as shown in the second example of Table 7.", "Here the topic of the sentence is \"the country under the dictatorship of Lukashenko\", and our target and dual memory models are able to generate the appropriate pronoun/determiner as well as accurately translate the word 'diktatuur', hence producing a much better translation than either baseline.", "Apart from these improvements, our models also improve the readability of sentences by generating more context-appropriate grammatical structures such as verbs and adverbs.", "Furthermore, to validate that our model improves the consistency of translations, we look at five documents (roughly 70 sentences) from the test set of Estonian-English, each of which had a word repeated in the gold translation.", "Our model is able to resolve the consistency in 22 out of 32 cases, compared to the sentence-based model, which only accurately translates 16 of those.", "Following Wang et al.", "(2017), we also investigate the extent to which our model can correct errors made by the baseline system.", "We randomly choose five documents from the test set.", "Out of the 20 words/phrases which were incorrectly translated by the sentence-based model, our […]", "Related Work Document-level Statistical MT There have been a few SMT-based attempts at document MT, but they are either restrictive or do not lead to significant improvements.", "Hardmeier and Federico (2010) identify links among words in the source document using a word-dependency model to improve the translation of anaphoric pronouns.", "Gong et al.", "(2011) make use of a cache-based system to save relevant information from the previously generated translations and use it to enhance document-level translation.", "Garcia et al.", "(2014) propose a two-pass approach to improve the translations already obtained by a sentence-level model.", "Docent is an SMT-based document-level decoder (Hardmeier et al., 2012, 2013), which tries to modify the initial translation generated by the Moses decoder (Koehn et al., 2007) through stochastic local search and hill-climbing.", "Garcia et al.", "(2015) make use of neural-based continuous word representations to incorporate distributional semantics into Docent.", "In another work, Garcia et al.", "(2017) incorporate new word embedding features into Docent to improve the lexical consistency of translations.", "These methods fail to yield improvements under automatic evaluation.", "Larger Context Neural MT Jean et al.", "(2017) extend the vanilla attention-based neural MT model (Bahdanau et al., 2015) by conditioning the decoder on the previous sentence via attention over its words.", "Extending their model to consider the global source document context would be challenging due to the large size of the computation graph over all the words in the source document.", "Wang et al.", "(2017) employ a 2-level hierarchical RNN to summarise the three previous source sentences, which is then used as an additional input to the decoder hidden state.", "Bawden et al.", "(2017) use multi-encoder NMT models to exploit context from the previous source and target sentence.", "They highlight the importance of target-side context but report deteriorated BLEU scores when using it.", "All these works consider a very local source/target context and completely ignore the global source and target document contexts.", "Conclusion We have proposed a document-level neural MT model that captures global source and target document context.", "Our model augments the vanilla sentence-based NMT model with external memories to incorporate documental interdependencies on both source and target sides.", "We show statistically significant improvements in translation quality on three language pairs.", "For future work, we intend to investigate models which incorporate specific discourse-level phenomena." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Neural Machine Translation (NMT)", "Memory Networks (MemNets)", "Document NMT as Structured Prediction", "Context Dependent NMT with MemNets", "Experiments and Analysis", "Main Results", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-129#paper-1351#slide-2
Document NMT with MemNets
P(yt | xt, y−t, x−t) [Slide figure: encoder-decoder with attention over the example pair "qimonda täidab lissaboni strateegia eesmärke" → "qimonda fulfils the objectives of the lisbon strategy"] yt,j ∼ softmax(Wy · rt,j + Wym · c_src_t + Wyt · c_trg_t + br) • Use only source, target, or both external memories • Use Memory-to-Context/Memory-to-Output architectures for incorporating the different contexts
P(yt | xt, y−t, x−t) [Slide figure: encoder-decoder with attention over the example pair "qimonda täidab lissaboni strateegia eesmärke" → "qimonda fulfils the objectives of the lisbon strategy"] yt,j ∼ softmax(Wy · rt,j + Wym · c_src_t + Wyt · c_trg_t + br) • Use only source, target, or both external memories • Use Memory-to-Context/Memory-to-Output architectures for incorporating the different contexts
[]
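The output-layer equation on the slide above can be sketched directly. This is a minimal numpy illustration of the Memory-to-Output variant; all shapes and parameter matrices are assumptions for the sketch.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def memory_to_output_step(W_y, r_tj, W_ym, c_src, W_yt, c_trg, b_r):
    """Distribution over the target vocabulary for word j of sentence t:
    y_{t,j} ~ softmax(W_y r_{t,j} + W_ym c^src_t + W_yt c^trg_t + b_r)."""
    logits = W_y @ r_tj + W_ym @ c_src + W_yt @ c_trg + b_r
    return softmax(logits)
```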
GEM-SciDuet-train-129#paper-1351#slide-3
1351
Document Context Neural Machine Translation with Memory Networks
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Neural machine translation (NMT) has proven to be powerful (Sutskever et al., 2014; Bahdanau et al., 2015) .", "It is on-par, and in some cases, even surpasses the traditional statistical MT (Luong et al., 2015) while enjoying more flexibility and significantly less manual effort for feature engineering.", "Despite their flexibility, most neural MT models translate sentences independently.", "Discourse phenomenon such as pronominal anaphora and lexical consistency, may depend on long-range dependency going farther than a few previous sentences, are neglected in sentencebased translation (Bawden et al., 2017) .", "There are only a handful of attempts to document-wide machine translation in statistical and neural MT camps.", "Hardmeier and Federico (2010) ; Gong et al.", "(2011) ; Garcia et al.", "(2014) propose document translation models based on statistical MT but are restrictive in the way they incorporate the document-level information and fail to gain significant improvements.", "More recently, there have been a few attempts to incorporate source side context into neural MT (Jean et al., 2017; Wang et al., 2017; Bawden et al., 2017) ; however, these works only consider a very local context including a few previous source/target sentences, ignoring the global source and target documental contexts.", "The latter two report deteriorated performance when using the target-side context.", "In this paper, we present a document-level machine translation model which combines sentencebased NMT (Bahdanau et al., 2015) with memory networks (Sukhbaatar et al., 2015) .", "We capture the global source and target document context with two memory components, one each for the source and target side, and incorporate it into the sentence-based NMT by changing the decoder to condition on it as the sentence translation is generated.", "We conduct experiments on three language pairs: French-English, German-English and Estonian-English.", "The experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.", "Background Neural Machine Translation (NMT) Our document NMT model is grounded on sentence-based NMT model (Bahdanau et al., 2015) which contains an encoder to read the source sentence as well as an attentional decoder to generate the target translation.", "Encoder It is a bidirectional RNN consisting of two RNNs running in opposite directions over the source sentence: − → hi = −−→ RNN( − → 
h i−1, ES[xi]), ← − h i = ←−− RNN( ← − h i+1, ES[xi]) where E S [x i ] is embedding of the word x i from the embedding table E S of the source language, and − → h i and ← − h i are the hidden states of the forward and backward RNNs which can be based on the LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) units.", "Each word in the source sentence is then represented by the concatenation of the corresponding bidirectional hidden states, h i = [ − → h i ; ← − h i ].", "Decoder The generation of each word y j is conditioned on all of the previously generated words y <j via the state of the RNN decoder s j , and the source sentence via a dynamic context vector c j : yj ∼ softmax(Wy · rj + br) rj = tanh(sj + Wrc · cj + Wrj · ET [yj−1]) sj = tanh(Ws · sj−1 + Wsj · ET [yj−1] + Wsc · cj) where E T [y j ] is embedding of the word y j from the embedding table E T of the target language, and W matrices and b r vector are the parameters.", "The dynamic context vector c j is computed via c j = i α ji h i , where α j = softmax(a j ) a ji = v · tanh(W ae · h i + W at · s j−1 ) This is known as the attention mechanism which dynamically attends to relevant parts of the source necessary for generating the next target word.", "Memory Networks (MemNets) Memory Networks are a class of neural models that use external memories to perform inference based on long-range dependencies.", "A memory is a collection of vectors M = {m 1 , .., m K } constituting the memory cells, where each cell m k may potentially correspond to a discrete object x k .", "The memory is equipped with a read and optionally a write operation.", "Given a query vector q, the output vector generated by reading from the memory is |M | i=1 p i m i , where p i represents the relevance of the query to the i-th memory cell p = Document NMT as Structured Prediction We formulate document-wide machine translation as a structured prediction problem.", "Given a set of sentences {x 1 , .", ".", ".", ", x |d| } in a source document d, we are interested in generating the collection of their translations {y 1 , .", ".", ".", ", y |d| } taking into account interdependencies among them imposed by the document.", "We achieve this by the factor graph in Figure 1 to model the probability of the target document given the source document.", "Our model has two types of factors: • f θ (y t ; x t , x −t ) to capture the interdependencies between the translation y t , the corresponding source sentence x t and all the other sentences in the source document x −t , and • g θ (y t ; y −t ) to capture the interdependencies between the translation y t and all the other translations in the document y −t .", "Hence, the probability of a document translation given the source document is P (y 1 , .", ".", ".", ", y |d| |x 1 , .", ".", ".", ", x |d| ) ∝ exp t f θ (y t ; x t , x −t ) + g θ (y t ; y −t ) .", "The factors f θ and g θ are realised by neural architectures whose parameters are collectively denoted by θ.", "Training It is challenging to train the model parameters by maximising the (regularised) likelihood since computing the partition function is hard.", "This is due to the enormity of factors g θ (y t ; y −t ) over a large number of translation variables y t 's (i.e., the number of sentences in the document) as well as their unbounded domain (i.e., all sentences in the target language).", "Thus, we resort to maximising the pseudo-likelihood (Besag, 1975) for training the parameters: arg max θ d∈D |d| t=1 P θ (y t |x t , y −t , x −t ) (1) where D is 
the set of bilingual training documents, and |d| denotes the number of (bilingual) sentences in the document d = {(x t , y t )} |d| t=1 .", "We directly model the document-conditioned NMT model P θ (y t |x t , y −t , x −t ) using a neural architecture which subsumes both the f θ and g θ factors (covered in the next section).", "Decoding To generate the best translation for a document according to our model, we need to solve the following optimisation problem: arg max y 1 ,...,y |d| |d| t=1 P θ (y t |x t , y −t , x −t ) which is hard (due to similar reasons as mentioned earlier).", "We hence resort to a block coordinate descent optimisation algorithm.", "More specifically, we initialise the translation of each sentence using the base neural MT model P (y t |x t ).", "We then repeatedly visit each sentence in the document, and update its translation using our document-context dependent NMT model P (y t |x t , y −t , x −t ) while the translations of other sentences are kept fixed.", "Context Dependent NMT with MemNets We augment the sentence-level attentional NMT model by incorporating the document context (both source and target) using memory networks when generating the translation of a sentence, as shown in Figure 2 .", "Our model generates the target translation word-by-word from left to right, similar to the vanilla attentional neural translation model.", "However, it conditions the generation of a target word not only on the previously generated words and the current source sentence (as in the vanilla NMT model), but also on all the other source sentences of the document and their translations.", "That is, the generation process is as follows: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, y−t, x−t) (2) where y t,j is the j-th word of the t-th target sentence, y t,<j are the previously generated words, and x −t and y −t are as introduced previously.", "Our model represents the source and target document contexts as external memories, and attends to relevant parts of these external memories when generating the translation of a sentence.", "Let M [x −t ] and M [y −t ] denote external memories representing the source and target document context, respectively.", "These contain memory cells corresponding to all sentences in the document except the t-th sentence (described shortly).", "Let h t and s t be representations of the t-th source sentence and its current translation, from the encoder and decoder respectively.", "We make use of h t as the query to get the relevant context from the source external memory: c src t = MemNet(M [x −t ], h t ) Furthermore, for the t-th sentence, we get the relevant information from the target context: c trg t = MemNet(M [y −t ], s t + W at · h t ) where the query consists of the representation of the translation s t from the decoder endowed with that of the source sentence h t from the encoder to make the query robust to potential noises in the current translation and circumvent error propagation, and W at projects the source representation into the hidden state space.", "Now that we have representations of the relevant source and target document contexts, Eq.", "2 can be re-written as: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, c trg t , c src t ) (3) More specifically, the memory contexts c src t and c trg t are incorporated into the NMT decoder as: • Memory-to-Context in which the memory contexts are incorporated when computing the next decoder hidden state: • Memory-to-Output in which the memory contexts are incorporated in the 
output layer: s t,j = tanh(W s · s t,j−1 + W sj · E T [y t,j ] + W sc · c t,j + W sm · c src t + W st · c trg t ) y t,j ∼ softmax(W y · r t,j + W ym · c src t + W yt · c trg t + b r ) where W sm , W st , W ym , and W yt are the new parameter matrices.", "We use only the source, only the target, or both external memories as the additional conditioning contexts.", "Furthermore, we use either the Memory-to-Context or Memory-to-Output architectures for incorporating the document contexts.", "In the experiments, we will explore these different options to investigate the most effective combination.", "We now turn our attention to the construction of the external memories for the source and target sides of a document.", "The Source Memory We make use of a hierarchical 2-level RNN architecture to construct the external memory of the source document.", "More specifically, we pass each sentence of the document through a sentence-level bidirectional RNN to get the representation of the sentence (by concatenating the last hidden states of the forward and backward RNNs).", "We then pass the sentence representations through a document-level bidirectional RNN to propagate sentences' information across the document.", "We take the hidden states of the document-level bidirectional RNNs as the memory cells of the source external memory.", "The source external memory is built once for each minibatch, and does not change throughout the document translation.", "To be able to fit the computational graph of the document NMT model within GPU memory limits, we pre-train the sentence-level bidirectional RNN using the language modelling training objective.", "However, the document-level bidirectional RNN is trained together with other parameters of the document NMT model by back-propagating the document translation training objective.", "The Target Memory The memory cells of the target external memory represent the current translations of the document.", "Recall from the previous section that we use coordinate descent iteratively to update these translations.", "Let {y 1 , .", ".", ".", ", y |d| } be the current translations, and let {s |y 1 | , .", ".", ".", ", s |y |d| | } be the last states of the decoder when these translations were generated.", "We use these last decoder states as the cells of the external target memory.", "We could make use of hierarchical sentencedocument RNNs to transform the document translations into memory cells (similar to what we do for the source memory); however, it would have been computationally expensive and may have resulted in error propagation.", "We will show in the experiments that our efficient target memory construction is indeed effective.", "Experiments and Analysis Datasets.", "We conducted experiments on three language pairs: French-English, German-English and Estonian-English.", "Table 1 shows the statistics of the datasets used in our experiments.", "The French-English dataset is based on the TED Talks corpus 1 (Cettolo et al., 2012) where each talk is considered a document.", "The Estonian-English data comes from the Europarl v7 corpus 2 (Koehn, 2005) .", "Following Smith et al.", "(2013) , we split the speeches based on the SPEAKER tag and treat them as documents.", "The French-English and Estonian-English corpora were randomly split into train/dev/test sets.", "For German-English, we use the News Commentary v9 corpus 3 for training, news-dev2009 for development, Table 1 : Training/dev/test corpora statistics: number of documents (×100) and sentences (×1000), average 
document length (in sentences) and source/target vocabulary size (×1000).", "For De-En, we report statistics of the two test sets news-test2011 and news-test2016.", "and news-test2011 and news-test2016 as the test sets.", "The news-commentary corpus has document boundaries already provided.", "We pre-processed all corpora to remove very short documents and those with missing translations.", "Out-of-vocabulary and rare words (frequency less than 5) are replaced by the <UNK> token, following Cohn et al.", "(2016).", "4 Evaluation Measures We use BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores to measure the quality of the generated translations.", "We use bootstrap resampling (Clark et al., 2011) to measure statistical significance, p < 0.05, comparing to the baselines.", "Implementation and Hyperparameters We implement our document-level neural machine translation model in C++ using the DyNet library (Neubig et al., 2017) , on top of the basic sentence-level NMT implementation in mantis (Cohn et al., 2016) .", "For the source memory, the sentence and document-level bidirectional RNNs use LSTM and GRU units, respectively.", "The translation model uses GRU units for the bidirectional RNN encoder and the 2-layer RNN decoder.", "GRUs are used instead of LSTMs to reduce the number of parameters in the main model.", "The RNN hidden dimensions and word embedding sizes are set to 512 in the translation and memory components, and the alignment dimension is set to 256 in the translation model.", "Training We use a stage-wise method to train the variants of our document context NMT model.", "Firstly, we pre-train the Memory-to-Context/Memory-to-Output models, setting their readings from the source and target memories to the zero vector.", "This effectively learns parameters associated with the underlying sentence-based NMT model, which is then used as initialisation when training all parameters in the second stage (including the ones from the first stage).", "For the first stage, we make use of stochastic gradient descent (SGD) 5 with initial learning rate of 0.1 and a decay factor of 0.5 after the fourth epoch for a total of ten epochs.", "The convergence occurs in 6-8 epochs.", "For the second stage, we use SGD with an initial learning rate of 0.08 and a decay factor of 0.9 after the first epoch for a total of 15 epochs 6 .", "The best model is picked based on the dev-set perplexity.", "To avoid overfitting, we employ dropout with the rate 0.2 for the single memory model.", "For the dual memory model, we set dropout for Document RNN to 0.2 and for the encoder and decoder to 0.5.", "Mini-batching is used in both stages to speed up training.", "For the largest dataset, the document NMT model takes about 4.5 hours per epoch to train on a single P100 GPU, while the sentence-level model takes about 3 hours per epoch for the same settings.", "When training the document NMT model in the second stage, we need the target memory.", "One option would be to use the ground truth translations for building the memory.", "However, this may result in inferior training, since at the test time, the decoder iteratively updates the translation of sentences based on the noisy translations of other sentences (accessed via the target memory).", "Hence, while training the document NMT model, we construct the target memory from the translations generated by the pre-trained sentence-level model 7 .", "This effectively exposes the model to its potential test-time mistakes during the training time, 
resulting in more robust learned parameters.", "Main Results We have three variants of our model, using: (i) only the source memory (S-NMT+src mem), (ii) only the target memory (S-NMT+trg mem), or 5 In our initial experiments, we found SGD to be more effective than Adam/Adagrad; an observation also made by Bahar et al.", "(2017) .", "6 For the document NMT model training, we did some preliminary experiments using different learning rates and used the scheme which converged to the best perplexity in the least number of epochs while for sentence-level training we follow Cohn et al.", "(2016) .", "7 We report results for two-pass decoding, i.e., we only update the translations once using the initial translations generated from the base model.", "We tried multiple passes of decoding at test-time but it was not helpful.", "(iii) both the source and target memories (S-NMT+both mems).", "We compare these variants against the standard sentence-level NMT model (S-NMT).", "We also compare the source memory variants of our model to the local context-NMT models 8 of Jean et al.", "(2017) and Wang et al.", "(2017) , which use a few previous source sentences as context, added to the decoder hidden state (similar to our Memory-to-Context model).", "Memory-to-Context We consistently observe +1.15/+1.13 BLEU/METEOR score improvements across the three language pairs upon comparing our best model to S-NMT (see Table 2 ).", "Overall, our document NMT model with both memories has been the most effective variant for all of the three language pairs.", "We further experiment to train the target memory variants using gold translations instead of the generated ones for German-English.", "This led to −0.16 and −0.25 decrease 9 in the BLEU scores for the target-only and both-memory variants, which confirms the intuition of constructing the target memory by exposing the model to its noises during training time.", "guage pairs.", "For French→English, all variants of document NMT model show comparable performance when using BLEU; however, when evaluated using METEOR, the dual memory model is the best.", "For German→English, the target memory variants give comparable results, whereas for Estonian→English, the dual memory variant proves to be the best.", "Overall, the Memory-to-Context model variants perform better than their Memory-to-Output counterparts.", "We attribute this to the large number of parameters in the latter architecture (Table 3 ) and limited amount of data.", "We further experiment with more data for train-BLEU METEOR Fr→En De→En Et→EnFr→En De→En Et→En NC-11 NC-16 NC-11 NC-16 Jean et al.", "(2017) 21.95 6.04 10.26 21.67 24.10 11.61 15.56 25.77 Wang et al.", "(2017) ing the sentence-based NMT to investigate the extent to which document context is useful in this setting.", "We randomly choose an additional 300K German-English sentence pairs from WMT'14 data to train the base NMT model in stage 1.", "In stage 2, we use the same document corpus as before to train the document-level models.", "As seen from Figure 3 , the document MT variants still benefit from the document context even when the base model is trained on a larger bilingual corpus.", "For the Memory-to-Context model, we see massive improvements of +0.72 and +1.44 METEOR scores for the source memory and dual memory model respectively, when compared to the baseline.", "On the other hand, for the Memory-to-Output model, the target memory model's METEOR score increases significantly by +1.09 compared to the baseline, slightly differing from the 
corresponding model using the smaller corpus (+1.2).", "Table 4 shows comparison of our Memory-to-Context model variants to local source context-NMT models (Jean et al., 2017; Wang et al., 2017) .", "For French→English, our source memory model is comparable to both baselines.", "For German→English, our S-NMT+src mem model is comparable to Jean et al.", "(2017) but outperforms Wang et al.", "(2017) for one test set according to BLEU, and for both test sets according to METEOR.", "For Estonian→English, our model outperforms Jean et al.", "(2017) .", "Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context since we do an offline training to get the sentence representations (as previously mentioned).", "However, the other two context baselines have access to that information, yet our model's performance is either better or quite close to those models.", "We also look into the unigram BLEU scores to see how much our global source memory variants lead to improvement at the word-level.", "From Table 5 , it can be seen that our model's performance is better than the baselines for majority of the cases.", "The S-NMT+both mems model gives the best results for all three language pairs, showing that leveraging both source and target document context is indeed beneficial for improving MT performance.", "Memory-to-Output From Local Source Context Models Analysis Using Global/Local Target Context We first investigate whether using a local target context would have been equally sufficient in comparison to our global target memory model for the three datasets.", "We condition the decoder on the previous target sentence representation (obtained from the last hidden state of the decoder) by adding it as an additional input to all decoder states (PrevTrg) similar to our Memory-to-Context model.", "From Table 6 , we observe that for French→English and Estonian→English, using all sentences in the target context or just the previous target sentence gives comparable results.", "We may attribute this to these specific datasets, that is documents from TED talks or European Parliament Proceedings may depend more on the local than on the global context.", "However, for German→English , the target memory model performs the best show- ing that for documents with richer context (e.g.", "news articles) we do need the global target document context to improve MT performance.", "Output Analysis To better understand the dual memory model, we look at the first sentence example in Table 7 .", "It can be seen that the source sentence has the noun \"Qimonda\" but the sentencelevel NMT model fails to attend to it when generating the translation.", "On the other hand, the single memory models are better in delivering some, if not all, of the underlying information in the source sentence but the dual memory model's translation quality surpasses them.", "This is because the word \"Qimonda\" was being repeated in this specific document, providing a strong contextual signal to our global document context model while the local context model by Wang et al.", "(2017) is still unable to correctly translate the noun even when it has access to the word-level information of previous sentences.", "We resort to manual evaluation as there is no standard metric which evaluates document-level discourse information like consistency or pronominal anaphora.", "By manual inspection, we observe that our models can identify nouns in the source sentence to resolve coreferent 
"Here the topic of the sentence is \"the country under the dictatorship of Lukashenko\", and our target and dual memory models are able to generate the appropriate pronoun/determiner as well as accurately translate the word 'diktatuur', hence producing a much better translation compared to both baselines.", "Apart from these improvements, our models are better at improving the readability of sentences by generating more context-appropriate grammatical structures such as verbs and adverbs.", "Furthermore, to validate that our model improves the consistency of translations, we look at five documents (roughly 70 sentences) from the Estonian-English test set, each of which had a word being repeated in the gold translation.", "Our model is able to resolve the consistency in 22 out of 32 cases, compared to the sentence-based model, which only accurately translates 16 of those.", "Following Wang et al. (2017), we also investigate the extent to which our model can correct errors made by the baseline system.", "We randomly choose five documents from the test set.", "Out of the 20 words/phrases which were incorrectly translated by the sentence-based model, our", "Related Work Document-level Statistical MT There have been a few SMT-based attempts at document-level MT, but they are either restrictive or do not lead to significant improvements.", "Hardmeier and Federico (2010) identify links among words in the source document using a word-dependency model to improve the translation of anaphoric pronouns.", "Gong et al. (2011) make use of a cache-based system to save relevant information from the previously generated translations and use it to enhance document-level translation.", "Garcia et al. (2014) propose a two-pass approach to improve the translations already obtained by a sentence-level model.", "Docent is an SMT-based document-level decoder (Hardmeier et al., 2012, 2013), which tries to modify the initial translation generated by the Moses decoder (Koehn et al., 2007) through stochastic local search and hill-climbing.", "Garcia et al. (2015) make use of neural-based continuous word representations to incorporate distributional semantics into Docent.", "In another work, Garcia et al. (2017) incorporate new word embedding features into Docent to improve the lexical consistency of translations.", "The proposed methods fail to yield improvements under automatic evaluation.", "Larger Context Neural MT Jean et al. (2017) extend the vanilla attention-based neural MT model (Bahdanau et al., 2015) by conditioning the decoder on the previous sentence via attention over its words.", "Extending their model to consider the global source document context would be challenging due to the large size of the computation graph over all the words in the source document.", "Wang et al. (2017) employ a 2-level hierarchical RNN to summarise three previous source sentences, which is then used as an additional input to the decoder hidden state.", "Bawden et al. (2017) use multi-encoder NMT models to exploit context from the previous source and target sentence.", "They highlight the importance of target-side context but report deteriorated BLEU scores when using it.", "All these works consider a very local source/target context and completely ignore the global source and target document contexts.", "Conclusion We have proposed a document-level neural MT model that captures global source and target document context.", "Our model augments the vanilla sentence-based NMT model with external memories to incorporate documental interdependencies on both source and target sides.",
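The lexical-consistency analysis above (22/32 vs. 16/32 cases) was done by hand; the sketch below shows one way such a check could be approximated automatically — counting, for a word repeated in the gold document, how often the corresponding system sentences also use it. The function name and the proxy criterion are ours, not the paper's.

```python
def consistency_count(gold_doc, sys_doc, term):
    """For a word `term` repeated in the gold document, count in how many
    of the sentences containing it the system translation also uses it."""
    resolved = total = 0
    for gold, sys in zip(gold_doc, sys_doc):
        if term in gold.split():
            total += 1
            resolved += term in sys.split()  # True counts as 1
    return resolved, total
```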
"We show statistically significant improvements in translation quality on three language pairs.", "For future work, we intend to investigate models which incorporate specific discourse-level phenomena." ] }
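The conclusion recaps the core mechanism; as a pointer, the Memory-to-Context decoder-state update quoted in the paper body — s_{t,j} = tanh(W_s s_{t,j-1} + W_{sj} E_T[y_{t,j}] + W_{sc} c_{t,j} + W_{sm} c^{src}_t + W_{st} c^{trg}_t) — can be sketched with NumPy as below. Matrix names follow the paper; the function decomposition and dict layout are ours.

```python
import numpy as np

def memory_to_context_step(W, prev_state, y_emb, attn_ctx, c_src, c_trg):
    """One decoder hidden-state update of the Memory-to-Context variant;
    W maps the paper's matrix names ('s', 'sj', 'sc', 'sm', 'st') to arrays."""
    return np.tanh(W["s"] @ prev_state + W["sj"] @ y_emb + W["sc"] @ attn_ctx
                   + W["sm"] @ c_src + W["st"] @ c_trg)
```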
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Neural Machine Translation (NMT)", "Memory Networks (MemNets)", "Document NMT as Structured Prediction", "Context Dependent NMT with MemNets", "Experiments and Analysis", "Main Results", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-129#paper-1351#slide-3
Experimental Setup
corpus #docs (H) #sents (K) avg doc len Evaluation Metrics: BLEU, METEOR Local source context baselines:
corpus #docs (H) #sents (K) avg doc len Evaluation Metrics: BLEU, METEOR Local source context baselines:
[]
GEM-SciDuet-train-129#paper-1351#slide-4
1351
GEM-SciDuet-train-129#paper-1351#slide-4
Memory to Context Results
Experiments and Analysis S-NMT S-NMT+src S-NMT+trg S-NMT+both
Experiments and Analysis S-NMT S-NMT+src S-NMT+trg S-NMT+both
[]
GEM-SciDuet-train-129#paper-1351#slide-5
1351
Document Context Neural Machine Translation with Memory Networks
We present a document-level neural machine translation model which takes both source and target document context into account using memory networks. We model the problem as a structured prediction problem with interdependencies among the observed and hidden variables, i.e., the source sentences and their unobserved target translations in the document. The resulting structured prediction problem is tackled with a neural translation model equipped with two memory components, one each for the source and target side, to capture the documental interdependencies. We train the model endto-end, and propose an iterative decoding algorithm based on block coordinate descent. Experimental results of English translations from French, German, and Estonian documents show that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Neural machine translation (NMT) has proven to be powerful (Sutskever et al., 2014; Bahdanau et al., 2015) .", "It is on-par, and in some cases, even surpasses the traditional statistical MT (Luong et al., 2015) while enjoying more flexibility and significantly less manual effort for feature engineering.", "Despite their flexibility, most neural MT models translate sentences independently.", "Discourse phenomenon such as pronominal anaphora and lexical consistency, may depend on long-range dependency going farther than a few previous sentences, are neglected in sentencebased translation (Bawden et al., 2017) .", "There are only a handful of attempts to document-wide machine translation in statistical and neural MT camps.", "Hardmeier and Federico (2010) ; Gong et al.", "(2011) ; Garcia et al.", "(2014) propose document translation models based on statistical MT but are restrictive in the way they incorporate the document-level information and fail to gain significant improvements.", "More recently, there have been a few attempts to incorporate source side context into neural MT (Jean et al., 2017; Wang et al., 2017; Bawden et al., 2017) ; however, these works only consider a very local context including a few previous source/target sentences, ignoring the global source and target documental contexts.", "The latter two report deteriorated performance when using the target-side context.", "In this paper, we present a document-level machine translation model which combines sentencebased NMT (Bahdanau et al., 2015) with memory networks (Sukhbaatar et al., 2015) .", "We capture the global source and target document context with two memory components, one each for the source and target side, and incorporate it into the sentence-based NMT by changing the decoder to condition on it as the sentence translation is generated.", "We conduct experiments on three language pairs: French-English, German-English and Estonian-English.", "The experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.", "Background Neural Machine Translation (NMT) Our document NMT model is grounded on sentence-based NMT model (Bahdanau et al., 2015) which contains an encoder to read the source sentence as well as an attentional decoder to generate the target translation.", "Encoder It is a bidirectional RNN consisting of two RNNs running in opposite directions over the source sentence: − → hi = −−→ RNN( − → 
h i−1, ES[xi]), ← − h i = ←−− RNN( ← − h i+1, ES[xi]) where E S [x i ] is embedding of the word x i from the embedding table E S of the source language, and − → h i and ← − h i are the hidden states of the forward and backward RNNs which can be based on the LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) units.", "Each word in the source sentence is then represented by the concatenation of the corresponding bidirectional hidden states, h i = [ − → h i ; ← − h i ].", "Decoder The generation of each word y j is conditioned on all of the previously generated words y <j via the state of the RNN decoder s j , and the source sentence via a dynamic context vector c j : yj ∼ softmax(Wy · rj + br) rj = tanh(sj + Wrc · cj + Wrj · ET [yj−1]) sj = tanh(Ws · sj−1 + Wsj · ET [yj−1] + Wsc · cj) where E T [y j ] is embedding of the word y j from the embedding table E T of the target language, and W matrices and b r vector are the parameters.", "The dynamic context vector c j is computed via c j = i α ji h i , where α j = softmax(a j ) a ji = v · tanh(W ae · h i + W at · s j−1 ) This is known as the attention mechanism which dynamically attends to relevant parts of the source necessary for generating the next target word.", "Memory Networks (MemNets) Memory Networks are a class of neural models that use external memories to perform inference based on long-range dependencies.", "A memory is a collection of vectors M = {m 1 , .., m K } constituting the memory cells, where each cell m k may potentially correspond to a discrete object x k .", "The memory is equipped with a read and optionally a write operation.", "Given a query vector q, the output vector generated by reading from the memory is |M | i=1 p i m i , where p i represents the relevance of the query to the i-th memory cell p = Document NMT as Structured Prediction We formulate document-wide machine translation as a structured prediction problem.", "Given a set of sentences {x 1 , .", ".", ".", ", x |d| } in a source document d, we are interested in generating the collection of their translations {y 1 , .", ".", ".", ", y |d| } taking into account interdependencies among them imposed by the document.", "We achieve this by the factor graph in Figure 1 to model the probability of the target document given the source document.", "Our model has two types of factors: • f θ (y t ; x t , x −t ) to capture the interdependencies between the translation y t , the corresponding source sentence x t and all the other sentences in the source document x −t , and • g θ (y t ; y −t ) to capture the interdependencies between the translation y t and all the other translations in the document y −t .", "Hence, the probability of a document translation given the source document is P (y 1 , .", ".", ".", ", y |d| |x 1 , .", ".", ".", ", x |d| ) ∝ exp t f θ (y t ; x t , x −t ) + g θ (y t ; y −t ) .", "The factors f θ and g θ are realised by neural architectures whose parameters are collectively denoted by θ.", "Training It is challenging to train the model parameters by maximising the (regularised) likelihood since computing the partition function is hard.", "This is due to the enormity of factors g θ (y t ; y −t ) over a large number of translation variables y t 's (i.e., the number of sentences in the document) as well as their unbounded domain (i.e., all sentences in the target language).", "Thus, we resort to maximising the pseudo-likelihood (Besag, 1975) for training the parameters: arg max θ d∈D |d| t=1 P θ (y t |x t , y −t , x −t ) (1) where D is 
the set of bilingual training documents, and |d| denotes the number of (bilingual) sentences in the document d = {(x t , y t )} |d| t=1 .", "We directly model the document-conditioned NMT model P θ (y t |x t , y −t , x −t ) using a neural architecture which subsumes both the f θ and g θ factors (covered in the next section).", "Decoding To generate the best translation for a document according to our model, we need to solve the following optimisation problem: arg max y 1 ,...,y |d| |d| t=1 P θ (y t |x t , y −t , x −t ) which is hard (due to similar reasons as mentioned earlier).", "We hence resort to a block coordinate descent optimisation algorithm.", "More specifically, we initialise the translation of each sentence using the base neural MT model P (y t |x t ).", "We then repeatedly visit each sentence in the document, and update its translation using our document-context dependent NMT model P (y t |x t , y −t , x −t ) while the translations of other sentences are kept fixed.", "Context Dependent NMT with MemNets We augment the sentence-level attentional NMT model by incorporating the document context (both source and target) using memory networks when generating the translation of a sentence, as shown in Figure 2 .", "Our model generates the target translation word-by-word from left to right, similar to the vanilla attentional neural translation model.", "However, it conditions the generation of a target word not only on the previously generated words and the current source sentence (as in the vanilla NMT model), but also on all the other source sentences of the document and their translations.", "That is, the generation process is as follows: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, y−t, x−t) (2) where y t,j is the j-th word of the t-th target sentence, y t,<j are the previously generated words, and x −t and y −t are as introduced previously.", "Our model represents the source and target document contexts as external memories, and attends to relevant parts of these external memories when generating the translation of a sentence.", "Let M [x −t ] and M [y −t ] denote external memories representing the source and target document context, respectively.", "These contain memory cells corresponding to all sentences in the document except the t-th sentence (described shortly).", "Let h t and s t be representations of the t-th source sentence and its current translation, from the encoder and decoder respectively.", "We make use of h t as the query to get the relevant context from the source external memory: c src t = MemNet(M [x −t ], h t ) Furthermore, for the t-th sentence, we get the relevant information from the target context: c trg t = MemNet(M [y −t ], s t + W at · h t ) where the query consists of the representation of the translation s t from the decoder endowed with that of the source sentence h t from the encoder to make the query robust to potential noises in the current translation and circumvent error propagation, and W at projects the source representation into the hidden state space.", "Now that we have representations of the relevant source and target document contexts, Eq.", "2 can be re-written as: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, c trg t , c src t ) (3) More specifically, the memory contexts c src t and c trg t are incorporated into the NMT decoder as: • Memory-to-Context in which the memory contexts are incorporated when computing the next decoder hidden state: • Memory-to-Output in which the memory contexts are incorporated in the 
output layer: s t,j = tanh(W s · s t,j−1 + W sj · E T [y t,j ] + W sc · c t,j + W sm · c src t + W st · c trg t ) y t,j ∼ softmax(W y · r t,j + W ym · c src t + W yt · c trg t + b r ) where W sm , W st , W ym , and W yt are the new parameter matrices.", "We use only the source, only the target, or both external memories as the additional conditioning contexts.", "Furthermore, we use either the Memory-to-Context or Memory-to-Output architectures for incorporating the document contexts.", "In the experiments, we will explore these different options to investigate the most effective combination.", "We now turn our attention to the construction of the external memories for the source and target sides of a document.", "The Source Memory We make use of a hierarchical 2-level RNN architecture to construct the external memory of the source document.", "More specifically, we pass each sentence of the document through a sentence-level bidirectional RNN to get the representation of the sentence (by concatenating the last hidden states of the forward and backward RNNs).", "We then pass the sentence representations through a document-level bidirectional RNN to propagate sentences' information across the document.", "We take the hidden states of the document-level bidirectional RNNs as the memory cells of the source external memory.", "The source external memory is built once for each minibatch, and does not change throughout the document translation.", "To be able to fit the computational graph of the document NMT model within GPU memory limits, we pre-train the sentence-level bidirectional RNN using the language modelling training objective.", "However, the document-level bidirectional RNN is trained together with other parameters of the document NMT model by back-propagating the document translation training objective.", "The Target Memory The memory cells of the target external memory represent the current translations of the document.", "Recall from the previous section that we use coordinate descent iteratively to update these translations.", "Let {y 1 , .", ".", ".", ", y |d| } be the current translations, and let {s |y 1 | , .", ".", ".", ", s |y |d| | } be the last states of the decoder when these translations were generated.", "We use these last decoder states as the cells of the external target memory.", "We could make use of hierarchical sentencedocument RNNs to transform the document translations into memory cells (similar to what we do for the source memory); however, it would have been computationally expensive and may have resulted in error propagation.", "We will show in the experiments that our efficient target memory construction is indeed effective.", "Experiments and Analysis Datasets.", "We conducted experiments on three language pairs: French-English, German-English and Estonian-English.", "Table 1 shows the statistics of the datasets used in our experiments.", "The French-English dataset is based on the TED Talks corpus 1 (Cettolo et al., 2012) where each talk is considered a document.", "The Estonian-English data comes from the Europarl v7 corpus 2 (Koehn, 2005) .", "Following Smith et al.", "(2013) , we split the speeches based on the SPEAKER tag and treat them as documents.", "The French-English and Estonian-English corpora were randomly split into train/dev/test sets.", "For German-English, we use the News Commentary v9 corpus 3 for training, news-dev2009 for development, Table 1 : Training/dev/test corpora statistics: number of documents (×100) and sentences (×1000), average 
document length (in sentences) and source/target vocabulary size (×1000).", "For De-En, we report statistics of the two test sets news-test2011 and news-test2016.", "and news-test2011 and news-test2016 as the test sets.", "The news-commentary corpus has document boundaries already provided.", "We pre-processed all corpora to remove very short documents and those with missing translations.", "Out-of-vocabulary and rare words (frequency less than 5) are replaced by the <UNK> token, following Cohn et al.", "(2016).", "4 Evaluation Measures We use BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores to measure the quality of the generated translations.", "We use bootstrap resampling (Clark et al., 2011) to measure statistical significance, p < 0.05, comparing to the baselines.", "Implementation and Hyperparameters We implement our document-level neural machine translation model in C++ using the DyNet library (Neubig et al., 2017) , on top of the basic sentence-level NMT implementation in mantis (Cohn et al., 2016) .", "For the source memory, the sentence and document-level bidirectional RNNs use LSTM and GRU units, respectively.", "The translation model uses GRU units for the bidirectional RNN encoder and the 2-layer RNN decoder.", "GRUs are used instead of LSTMs to reduce the number of parameters in the main model.", "The RNN hidden dimensions and word embedding sizes are set to 512 in the translation and memory components, and the alignment dimension is set to 256 in the translation model.", "Training We use a stage-wise method to train the variants of our document context NMT model.", "Firstly, we pre-train the Memory-to-Context/Memory-to-Output models, setting their readings from the source and target memories to the zero vector.", "This effectively learns parameters associated with the underlying sentence-based NMT model, which is then used as initialisation when training all parameters in the second stage (including the ones from the first stage).", "For the first stage, we make use of stochastic gradient descent (SGD) 5 with initial learning rate of 0.1 and a decay factor of 0.5 after the fourth epoch for a total of ten epochs.", "The convergence occurs in 6-8 epochs.", "For the second stage, we use SGD with an initial learning rate of 0.08 and a decay factor of 0.9 after the first epoch for a total of 15 epochs 6 .", "The best model is picked based on the dev-set perplexity.", "To avoid overfitting, we employ dropout with the rate 0.2 for the single memory model.", "For the dual memory model, we set dropout for Document RNN to 0.2 and for the encoder and decoder to 0.5.", "Mini-batching is used in both stages to speed up training.", "For the largest dataset, the document NMT model takes about 4.5 hours per epoch to train on a single P100 GPU, while the sentence-level model takes about 3 hours per epoch for the same settings.", "When training the document NMT model in the second stage, we need the target memory.", "One option would be to use the ground truth translations for building the memory.", "However, this may result in inferior training, since at the test time, the decoder iteratively updates the translation of sentences based on the noisy translations of other sentences (accessed via the target memory).", "Hence, while training the document NMT model, we construct the target memory from the translations generated by the pre-trained sentence-level model 7 .", "This effectively exposes the model to its potential test-time mistakes during the training time, 
resulting in more robust learned parameters.", "Main Results We have three variants of our model, using: (i) only the source memory (S-NMT+src mem), (ii) only the target memory (S-NMT+trg mem), or (iii) both the source and target memories (S-NMT+both mems).", "5 In our initial experiments, we found SGD to be more effective than Adam/Adagrad; an observation also made by Bahar et al.", "(2017) .", "6 For the document NMT model training, we did some preliminary experiments using different learning rates and used the scheme which converged to the best perplexity in the least number of epochs, while for sentence-level training we follow Cohn et al.", "(2016) .", "7 We report results for two-pass decoding, i.e., we only update the translations once using the initial translations generated from the base model.", "We tried multiple passes of decoding at test time but it was not helpful.", "We compare these variants against the standard sentence-level NMT model (S-NMT).", "We also compare the source memory variants of our model to the local context-NMT models 8 of Jean et al.", "(2017) and Wang et al.", "(2017) , which use a few previous source sentences as context, added to the decoder hidden state (similar to our Memory-to-Context model).", "Memory-to-Context We consistently observe +1.15/+1.13 BLEU/METEOR score improvements across the three language pairs upon comparing our best model to S-NMT (see Table 2 ).", "Overall, our document NMT model with both memories has been the most effective variant for all three language pairs.", "We further experiment with training the target memory variants using gold translations instead of the generated ones for German-English.", "This led to decreases 9 of −0.16 and −0.25 in the BLEU scores for the target-only and both-memory variants, which confirms the intuition of constructing the target memory by exposing the model to its own noise during training time.", "Memory-to-Output … language pairs.", "For French→English, all variants of the document NMT model show comparable performance when using BLEU; however, when evaluated using METEOR, the dual memory model is the best.", "For German→English, the target memory variants give comparable results, whereas for Estonian→English, the dual memory variant proves to be the best.", "Overall, the Memory-to-Context model variants perform better than their Memory-to-Output counterparts.", "We attribute this to the large number of parameters in the latter architecture (Table 3 ) and the limited amount of data.", "We further experiment with more data for training the sentence-based NMT to investigate the extent to which document context is useful in this setting.", "Table 4 (comparison to local source context models; BLEU and METEOR over Fr→En, De→En NC-11, De→En NC-16, Et→En): Jean et al. (2017) — BLEU 21.95, 6.04, 10.26, 21.67; METEOR 24.10, 11.61, 15.56, 25.77; Wang et al. (2017) — …", "We randomly choose an additional 300K German-English sentence pairs from WMT'14 data to train the base NMT model in stage 1.", "In stage 2, we use the same document corpus as before to train the document-level models.", "As seen from Figure 3 , the document MT variants still benefit from the document context even when the base model is trained on a larger bilingual corpus.", "For the Memory-to-Context model, we see large improvements of +0.72 and +1.44 METEOR for the source memory and dual memory models respectively, when compared to the baseline.", "On the other hand, for the Memory-to-Output model, the target memory model's METEOR score increases significantly by +1.09 compared to the baseline, slightly differing from the corresponding model using the smaller corpus (+1.2).", "Table 4 shows a comparison of our Memory-to-Context model variants to local source context-NMT models (Jean et al., 2017; Wang et al., 2017) .", "For French→English, our source memory model is comparable to both baselines.", "For German→English, our S-NMT+src mem model is comparable to Jean et al.", "(2017) but outperforms Wang et al.", "(2017) for one test set according to BLEU, and for both test sets according to METEOR.", "For Estonian→English, our model outperforms Jean et al.", "(2017) .", "Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context, since we train the sentence representations offline (as previously mentioned).", "However, the other two context baselines have access to that information, yet our model's performance is either better than or quite close to those models.", "We also look into the unigram BLEU scores to see how much our global source memory variants lead to improvement at the word level.", "From Table 5 , it can be seen that our model's performance is better than the baselines in the majority of cases.", "The S-NMT+both mems model gives the best results for all three language pairs, showing that leveraging both source and target document context is indeed beneficial for improving MT performance.", "Analysis Using Global/Local Target Context We first investigate whether using a local target context would have been equally sufficient in comparison to our global target memory model for the three datasets.", "We condition the decoder on the previous target sentence representation (obtained from the last hidden state of the decoder) by adding it as an additional input to all decoder states (PrevTrg), similar to our Memory-to-Context model.", "From Table 6 , we observe that for French→English and Estonian→English, using all sentences in the target context or just the previous target sentence gives comparable results.", "We may attribute this to these specific datasets; that is, documents from TED talks or European Parliament proceedings may depend more on the local than on the global context.", "However, for German→English, the target memory model performs the best, showing that for documents with richer context (e.g.", "news articles) we do need the global target document context to improve MT performance.", "Output Analysis To better understand the dual memory model, we look at the first sentence example in Table 7 .", "It can be seen that the source sentence has the noun \"Qimonda\" but the sentence-level NMT model fails to attend to it when generating the translation.", "On the other hand, the single memory models are better at delivering some, if not all, of the underlying information in the source sentence, but the dual memory model's translation quality surpasses them.", "This is because the word \"Qimonda\" was repeated in this specific document, providing a strong contextual signal to our global document context model, while the local context model by Wang et al.", "(2017) is still unable to correctly translate the noun even when it has access to the word-level information of previous sentences.", "We resort to manual evaluation as there is no standard metric which evaluates document-level discourse information like consistency or pronominal anaphora.", "By manual inspection, we observe that our models can identify nouns in the source sentence to resolve coreferent pronouns, as shown in the second example of Table 7 .", "Here the topic of the sentence is \"the country under the dictatorship of Lukashenko\" and our target and dual memory models are able to generate the appropriate pronoun/determiner as well as accurately translate the word 'diktatuur', hence producing a much better translation as compared to both baselines.", "Apart from these improvements, our models are better at improving the readability of sentences by generating more context-appropriate grammatical structures such as verbs and adverbs.", "Furthermore, to validate that our model improves the consistency of translations, we look at five documents (roughly 70 sentences) from the test set of Estonian-English, each of which had a word being repeated in the gold translation.", "Our model is able to resolve the consistency in 22 out of 32 cases, as compared to the sentence-based model, which only accurately translates 16 of those.", "Following Wang et al.", "(2017) , we also investigate the extent to which our model can correct errors made by the baseline system.", "We randomly choose five documents from the test set.", "Out of the 20 words/phrases which were incorrectly translated by the sentence-based model, our …", "Related Work Document-level Statistical MT There have been a few SMT-based attempts at document MT, but they are either restrictive or do not lead to significant improvements.", "Hardmeier and Federico (2010) identify links among words in the source document using a word-dependency model to improve the translation of anaphoric pronouns.", "Gong et al.", "(2011) make use of a cache-based system to save relevant information from the previously generated translations and use that to enhance document-level translation.", "Garcia et al.", "(2014) propose a two-pass approach to improve the translations already obtained by a sentence-level model.", "Docent is an SMT-based document-level decoder (Hardmeier et al., 2012, 2013) , which tries to modify the initial translation generated by the Moses decoder (Koehn et al., 2007) through stochastic local search and hill-climbing.", "Garcia et al.", "(2015) make use of neural-based continuous word representations to incorporate distributional semantics into Docent.", "In another work, Garcia et al.", "(2017) incorporate new word embedding features into Docent to improve the lexical consistency of translations.", "The proposed methods fail to yield improvements upon automatic evaluation.", "Larger Context Neural MT Jean et al.", "(2017) extend the vanilla attention-based neural MT model (Bahdanau et al., 2015) by conditioning the decoder on the previous sentence via attention over its words.", "Extending their model to consider the global source document context would be challenging due to the large size of the computation graph over all the words in the source document.", "Wang et al.", "(2017) employ a 2-level hierarchical RNN to summarise three previous source sentences, which is then used as an additional input to the decoder hidden state.", "Bawden et al.", "(2017) use multi-encoder NMT models to exploit context from the previous source and target sentence.", "They highlight the importance of target-side context but report deteriorated BLEU scores when using it.", "All these works consider a very local source/target context and completely ignore the global source and target document contexts.", "Conclusion We have proposed a document-level neural MT model that captures global source and target document context.", "Our model augments
the vanilla sentence-based NMT model with external memories to incorporate documental interdependencies on both source and target sides.", "We show statistically significant improvements of the translation quality on three language pairs.", "For future work, we intend to investigate models which incorporate specific discourse-level phenomena." ] }
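The stage-wise training recipe in the paper content above fixes all the concrete hyperparameters (stage 1: SGD with learning rate 0.1, decay factor 0.5 after epoch 4, ten epochs; stage 2: learning rate 0.08, decay factor 0.9 after epoch 1, fifteen epochs). A minimal Python sketch of one way to realise that schedule follows; it is an illustration only, not the authors' DyNet code, and the per-epoch multiplicative decay is one plausible reading of "decay factor after epoch N".

def lr_schedule(stage, epoch):
    # Stage 1: lr 0.1, multiplied by 0.5 for each epoch after the fourth (10 epochs total).
    # Stage 2: lr 0.08, multiplied by 0.9 for each epoch after the first (15 epochs total).
    if stage == 1:
        return 0.1 * (0.5 ** max(0, epoch - 4))
    return 0.08 * (0.9 ** max(0, epoch - 1))

for stage, n_epochs in ((1, 10), (2, 15)):
    for epoch in range(1, n_epochs + 1):
        lr = lr_schedule(stage, epoch)  # pass to the SGD trainer for this epoch

Under this reading, stage 1 ends around 0.1 x 0.5^6 ~ 0.0016 by epoch 10; a one-off decay at the stated epoch would be the other possible interpretation.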
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Neural Machine Translation (NMT)", "Memory Networks (MemNets)", "Document NMT as Structured Prediction", "Context Dependent NMT with MemNets", "Experiments and Analysis", "Main Results", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-129#paper-1351#slide-5
Memory to Output Results
[Results chart] S-NMT vs. S-NMT+src vs. S-NMT+trg vs. S-NMT+both (Memory-to-Output, per language pair)
[Results chart] S-NMT vs. S-NMT+src vs. S-NMT+trg vs. S-NMT+both (Memory-to-Output, per language pair)
[]
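The significance testing mentioned in the paper content (bootstrap resampling at p < 0.05, following Clark et al., 2011) is the standard paired bootstrap over resampled test sets. Below is a self-contained sketch; the function and argument names are ours, and score_fn stands in for any corpus-level metric such as BLEU or METEOR.

import random

def paired_bootstrap(score_fn, sys_a, sys_b, refs, n_samples=1000, seed=0):
    # Fraction of resampled test sets on which system A beats system B.
    # A difference is typically called significant at p < 0.05 when this
    # fraction is at least 0.95.
    rng = random.Random(seed)
    idx = list(range(len(refs)))
    wins = 0
    for _ in range(n_samples):
        sample = [rng.choice(idx) for _ in idx]  # resample with replacement
        a = score_fn([sys_a[i] for i in sample], [refs[i] for i in sample])
        b = score_fn([sys_b[i] for i in sample], [refs[i] for i in sample])
        if a > b:
            wins += 1
    return wins / n_samples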
GEM-SciDuet-train-129#paper-1351#slide-7
1351
Document Context Neural Machine Translation with Memory Networks
We present a document-level neural machine translation model which takes both source and target document context into account using memory networks. We model the problem as a structured prediction problem with interdependencies among the observed and hidden variables, i.e., the source sentences and their unobserved target translations in the document. The resulting structured prediction problem is tackled with a neural translation model equipped with two memory components, one each for the source and target side, to capture the documental interdependencies. We train the model end-to-end, and propose an iterative decoding algorithm based on block coordinate descent. Experimental results of English translations from French, German, and Estonian documents show that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.
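The iterative decoding this abstract refers to is spelled out later in the paper content: initialise every sentence with the sentence-level model, then revisit each sentence with the document-context model while the other translations are held fixed. A schematic sketch follows; base_translate and ctx_translate are placeholder names for the two models, and the paper reports that a single update pass (two-pass decoding) worked best.

def decode_document(base_translate, ctx_translate, src_sents, n_passes=1):
    # Pass 0: initialise each sentence independently with the base model.
    trans = [base_translate(x) for x in src_sents]
    # Block coordinate descent: re-translate each sentence conditioned on
    # the current translations of all the other sentences in the document.
    for _ in range(n_passes):
        for t, x in enumerate(src_sents):
            other_src = src_sents[:t] + src_sents[t + 1:]
            other_trg = trans[:t] + trans[t + 1:]
            trans[t] = ctx_translate(x, other_src, other_trg)
    return trans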
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Neural machine translation (NMT) has proven to be powerful (Sutskever et al., 2014; Bahdanau et al., 2015) .", "It is on-par, and in some cases, even surpasses the traditional statistical MT (Luong et al., 2015) while enjoying more flexibility and significantly less manual effort for feature engineering.", "Despite their flexibility, most neural MT models translate sentences independently.", "Discourse phenomenon such as pronominal anaphora and lexical consistency, may depend on long-range dependency going farther than a few previous sentences, are neglected in sentencebased translation (Bawden et al., 2017) .", "There are only a handful of attempts to document-wide machine translation in statistical and neural MT camps.", "Hardmeier and Federico (2010) ; Gong et al.", "(2011) ; Garcia et al.", "(2014) propose document translation models based on statistical MT but are restrictive in the way they incorporate the document-level information and fail to gain significant improvements.", "More recently, there have been a few attempts to incorporate source side context into neural MT (Jean et al., 2017; Wang et al., 2017; Bawden et al., 2017) ; however, these works only consider a very local context including a few previous source/target sentences, ignoring the global source and target documental contexts.", "The latter two report deteriorated performance when using the target-side context.", "In this paper, we present a document-level machine translation model which combines sentencebased NMT (Bahdanau et al., 2015) with memory networks (Sukhbaatar et al., 2015) .", "We capture the global source and target document context with two memory components, one each for the source and target side, and incorporate it into the sentence-based NMT by changing the decoder to condition on it as the sentence translation is generated.", "We conduct experiments on three language pairs: French-English, German-English and Estonian-English.", "The experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.", "Background Neural Machine Translation (NMT) Our document NMT model is grounded on sentence-based NMT model (Bahdanau et al., 2015) which contains an encoder to read the source sentence as well as an attentional decoder to generate the target translation.", "Encoder It is a bidirectional RNN consisting of two RNNs running in opposite directions over the source sentence: − → hi = −−→ RNN( − → 
h i−1, ES[xi]), ← − h i = ←−− RNN( ← − h i+1, ES[xi]) where E S [x i ] is embedding of the word x i from the embedding table E S of the source language, and − → h i and ← − h i are the hidden states of the forward and backward RNNs which can be based on the LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) units.", "Each word in the source sentence is then represented by the concatenation of the corresponding bidirectional hidden states, h i = [ − → h i ; ← − h i ].", "Decoder The generation of each word y j is conditioned on all of the previously generated words y <j via the state of the RNN decoder s j , and the source sentence via a dynamic context vector c j : yj ∼ softmax(Wy · rj + br) rj = tanh(sj + Wrc · cj + Wrj · ET [yj−1]) sj = tanh(Ws · sj−1 + Wsj · ET [yj−1] + Wsc · cj) where E T [y j ] is embedding of the word y j from the embedding table E T of the target language, and W matrices and b r vector are the parameters.", "The dynamic context vector c j is computed via c j = i α ji h i , where α j = softmax(a j ) a ji = v · tanh(W ae · h i + W at · s j−1 ) This is known as the attention mechanism which dynamically attends to relevant parts of the source necessary for generating the next target word.", "Memory Networks (MemNets) Memory Networks are a class of neural models that use external memories to perform inference based on long-range dependencies.", "A memory is a collection of vectors M = {m 1 , .., m K } constituting the memory cells, where each cell m k may potentially correspond to a discrete object x k .", "The memory is equipped with a read and optionally a write operation.", "Given a query vector q, the output vector generated by reading from the memory is |M | i=1 p i m i , where p i represents the relevance of the query to the i-th memory cell p = Document NMT as Structured Prediction We formulate document-wide machine translation as a structured prediction problem.", "Given a set of sentences {x 1 , .", ".", ".", ", x |d| } in a source document d, we are interested in generating the collection of their translations {y 1 , .", ".", ".", ", y |d| } taking into account interdependencies among them imposed by the document.", "We achieve this by the factor graph in Figure 1 to model the probability of the target document given the source document.", "Our model has two types of factors: • f θ (y t ; x t , x −t ) to capture the interdependencies between the translation y t , the corresponding source sentence x t and all the other sentences in the source document x −t , and • g θ (y t ; y −t ) to capture the interdependencies between the translation y t and all the other translations in the document y −t .", "Hence, the probability of a document translation given the source document is P (y 1 , .", ".", ".", ", y |d| |x 1 , .", ".", ".", ", x |d| ) ∝ exp t f θ (y t ; x t , x −t ) + g θ (y t ; y −t ) .", "The factors f θ and g θ are realised by neural architectures whose parameters are collectively denoted by θ.", "Training It is challenging to train the model parameters by maximising the (regularised) likelihood since computing the partition function is hard.", "This is due to the enormity of factors g θ (y t ; y −t ) over a large number of translation variables y t 's (i.e., the number of sentences in the document) as well as their unbounded domain (i.e., all sentences in the target language).", "Thus, we resort to maximising the pseudo-likelihood (Besag, 1975) for training the parameters: arg max θ d∈D |d| t=1 P θ (y t |x t , y −t , x −t ) (1) where D is 
the set of bilingual training documents, and |d| denotes the number of (bilingual) sentences in the document d = {(x t , y t )} |d| t=1 .", "We directly model the document-conditioned NMT model P θ (y t |x t , y −t , x −t ) using a neural architecture which subsumes both the f θ and g θ factors (covered in the next section).", "Decoding To generate the best translation for a document according to our model, we need to solve the following optimisation problem: arg max y 1 ,...,y |d| |d| t=1 P θ (y t |x t , y −t , x −t ) which is hard (due to similar reasons as mentioned earlier).", "We hence resort to a block coordinate descent optimisation algorithm.", "More specifically, we initialise the translation of each sentence using the base neural MT model P (y t |x t ).", "We then repeatedly visit each sentence in the document, and update its translation using our document-context dependent NMT model P (y t |x t , y −t , x −t ) while the translations of other sentences are kept fixed.", "Context Dependent NMT with MemNets We augment the sentence-level attentional NMT model by incorporating the document context (both source and target) using memory networks when generating the translation of a sentence, as shown in Figure 2 .", "Our model generates the target translation word-by-word from left to right, similar to the vanilla attentional neural translation model.", "However, it conditions the generation of a target word not only on the previously generated words and the current source sentence (as in the vanilla NMT model), but also on all the other source sentences of the document and their translations.", "That is, the generation process is as follows: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, y−t, x−t) (2) where y t,j is the j-th word of the t-th target sentence, y t,<j are the previously generated words, and x −t and y −t are as introduced previously.", "Our model represents the source and target document contexts as external memories, and attends to relevant parts of these external memories when generating the translation of a sentence.", "Let M [x −t ] and M [y −t ] denote external memories representing the source and target document context, respectively.", "These contain memory cells corresponding to all sentences in the document except the t-th sentence (described shortly).", "Let h t and s t be representations of the t-th source sentence and its current translation, from the encoder and decoder respectively.", "We make use of h t as the query to get the relevant context from the source external memory: c src t = MemNet(M [x −t ], h t ) Furthermore, for the t-th sentence, we get the relevant information from the target context: c trg t = MemNet(M [y −t ], s t + W at · h t ) where the query consists of the representation of the translation s t from the decoder endowed with that of the source sentence h t from the encoder to make the query robust to potential noises in the current translation and circumvent error propagation, and W at projects the source representation into the hidden state space.", "Now that we have representations of the relevant source and target document contexts, Eq.", "2 can be re-written as: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, c trg t , c src t ) (3) More specifically, the memory contexts c src t and c trg t are incorporated into the NMT decoder as: • Memory-to-Context in which the memory contexts are incorporated when computing the next decoder hidden state: • Memory-to-Output in which the memory contexts are incorporated in the 
output layer: s t,j = tanh(W s · s t,j−1 + W sj · E T [y t,j ] + W sc · c t,j + W sm · c src t + W st · c trg t ) y t,j ∼ softmax(W y · r t,j + W ym · c src t + W yt · c trg t + b r ) where W sm , W st , W ym , and W yt are the new parameter matrices.", "We use only the source, only the target, or both external memories as the additional conditioning contexts.", "Furthermore, we use either the Memory-to-Context or Memory-to-Output architectures for incorporating the document contexts.", "In the experiments, we will explore these different options to investigate the most effective combination.", "We now turn our attention to the construction of the external memories for the source and target sides of a document.", "The Source Memory We make use of a hierarchical 2-level RNN architecture to construct the external memory of the source document.", "More specifically, we pass each sentence of the document through a sentence-level bidirectional RNN to get the representation of the sentence (by concatenating the last hidden states of the forward and backward RNNs).", "We then pass the sentence representations through a document-level bidirectional RNN to propagate sentences' information across the document.", "We take the hidden states of the document-level bidirectional RNNs as the memory cells of the source external memory.", "The source external memory is built once for each minibatch, and does not change throughout the document translation.", "To be able to fit the computational graph of the document NMT model within GPU memory limits, we pre-train the sentence-level bidirectional RNN using the language modelling training objective.", "However, the document-level bidirectional RNN is trained together with other parameters of the document NMT model by back-propagating the document translation training objective.", "The Target Memory The memory cells of the target external memory represent the current translations of the document.", "Recall from the previous section that we use coordinate descent iteratively to update these translations.", "Let {y 1 , .", ".", ".", ", y |d| } be the current translations, and let {s |y 1 | , .", ".", ".", ", s |y |d| | } be the last states of the decoder when these translations were generated.", "We use these last decoder states as the cells of the external target memory.", "We could make use of hierarchical sentencedocument RNNs to transform the document translations into memory cells (similar to what we do for the source memory); however, it would have been computationally expensive and may have resulted in error propagation.", "We will show in the experiments that our efficient target memory construction is indeed effective.", "Experiments and Analysis Datasets.", "We conducted experiments on three language pairs: French-English, German-English and Estonian-English.", "Table 1 shows the statistics of the datasets used in our experiments.", "The French-English dataset is based on the TED Talks corpus 1 (Cettolo et al., 2012) where each talk is considered a document.", "The Estonian-English data comes from the Europarl v7 corpus 2 (Koehn, 2005) .", "Following Smith et al.", "(2013) , we split the speeches based on the SPEAKER tag and treat them as documents.", "The French-English and Estonian-English corpora were randomly split into train/dev/test sets.", "For German-English, we use the News Commentary v9 corpus 3 for training, news-dev2009 for development, Table 1 : Training/dev/test corpora statistics: number of documents (×100) and sentences (×1000), average 
document length (in sentences) and source/target vocabulary size (×1000).", "For De-En, we report statistics of the two test sets news-test2011 and news-test2016.", "and news-test2011 and news-test2016 as the test sets.", "The news-commentary corpus has document boundaries already provided.", "We pre-processed all corpora to remove very short documents and those with missing translations.", "Out-of-vocabulary and rare words (frequency less than 5) are replaced by the <UNK> token, following Cohn et al.", "(2016).", "4 Evaluation Measures We use BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores to measure the quality of the generated translations.", "We use bootstrap resampling (Clark et al., 2011) to measure statistical significance, p < 0.05, comparing to the baselines.", "Implementation and Hyperparameters We implement our document-level neural machine translation model in C++ using the DyNet library (Neubig et al., 2017) , on top of the basic sentence-level NMT implementation in mantis (Cohn et al., 2016) .", "For the source memory, the sentence and document-level bidirectional RNNs use LSTM and GRU units, respectively.", "The translation model uses GRU units for the bidirectional RNN encoder and the 2-layer RNN decoder.", "GRUs are used instead of LSTMs to reduce the number of parameters in the main model.", "The RNN hidden dimensions and word embedding sizes are set to 512 in the translation and memory components, and the alignment dimension is set to 256 in the translation model.", "Training We use a stage-wise method to train the variants of our document context NMT model.", "Firstly, we pre-train the Memory-to-Context/Memory-to-Output models, setting their readings from the source and target memories to the zero vector.", "This effectively learns parameters associated with the underlying sentence-based NMT model, which is then used as initialisation when training all parameters in the second stage (including the ones from the first stage).", "For the first stage, we make use of stochastic gradient descent (SGD) 5 with initial learning rate of 0.1 and a decay factor of 0.5 after the fourth epoch for a total of ten epochs.", "The convergence occurs in 6-8 epochs.", "For the second stage, we use SGD with an initial learning rate of 0.08 and a decay factor of 0.9 after the first epoch for a total of 15 epochs 6 .", "The best model is picked based on the dev-set perplexity.", "To avoid overfitting, we employ dropout with the rate 0.2 for the single memory model.", "For the dual memory model, we set dropout for Document RNN to 0.2 and for the encoder and decoder to 0.5.", "Mini-batching is used in both stages to speed up training.", "For the largest dataset, the document NMT model takes about 4.5 hours per epoch to train on a single P100 GPU, while the sentence-level model takes about 3 hours per epoch for the same settings.", "When training the document NMT model in the second stage, we need the target memory.", "One option would be to use the ground truth translations for building the memory.", "However, this may result in inferior training, since at the test time, the decoder iteratively updates the translation of sentences based on the noisy translations of other sentences (accessed via the target memory).", "Hence, while training the document NMT model, we construct the target memory from the translations generated by the pre-trained sentence-level model 7 .", "This effectively exposes the model to its potential test-time mistakes during the training time, 
resulting in more robust learned parameters.", "Main Results We have three variants of our model, using: (i) only the source memory (S-NMT+src mem), (ii) only the target memory (S-NMT+trg mem), or (iii) both the source and target memories (S-NMT+both mems).", "5 In our initial experiments, we found SGD to be more effective than Adam/Adagrad; an observation also made by Bahar et al.", "(2017) .", "6 For the document NMT model training, we did some preliminary experiments using different learning rates and used the scheme which converged to the best perplexity in the least number of epochs, while for sentence-level training we follow Cohn et al.", "(2016) .", "7 We report results for two-pass decoding, i.e., we only update the translations once using the initial translations generated from the base model.", "We tried multiple passes of decoding at test time but it was not helpful.", "We compare these variants against the standard sentence-level NMT model (S-NMT).", "We also compare the source memory variants of our model to the local context-NMT models 8 of Jean et al.", "(2017) and Wang et al.", "(2017) , which use a few previous source sentences as context, added to the decoder hidden state (similar to our Memory-to-Context model).", "Memory-to-Context We consistently observe +1.15/+1.13 BLEU/METEOR score improvements across the three language pairs upon comparing our best model to S-NMT (see Table 2 ).", "Overall, our document NMT model with both memories has been the most effective variant for all three language pairs.", "We further experiment with training the target memory variants using gold translations instead of the generated ones for German-English.", "This led to decreases 9 of −0.16 and −0.25 in the BLEU scores for the target-only and both-memory variants, which confirms the intuition of constructing the target memory by exposing the model to its own noise during training time.", "Memory-to-Output … language pairs.", "For French→English, all variants of the document NMT model show comparable performance when using BLEU; however, when evaluated using METEOR, the dual memory model is the best.", "For German→English, the target memory variants give comparable results, whereas for Estonian→English, the dual memory variant proves to be the best.", "Overall, the Memory-to-Context model variants perform better than their Memory-to-Output counterparts.", "We attribute this to the large number of parameters in the latter architecture (Table 3 ) and the limited amount of data.", "We further experiment with more data for training the sentence-based NMT to investigate the extent to which document context is useful in this setting.", "Table 4 (comparison to local source context models; BLEU and METEOR over Fr→En, De→En NC-11, De→En NC-16, Et→En): Jean et al. (2017) — BLEU 21.95, 6.04, 10.26, 21.67; METEOR 24.10, 11.61, 15.56, 25.77; Wang et al. (2017) — …", "We randomly choose an additional 300K German-English sentence pairs from WMT'14 data to train the base NMT model in stage 1.", "In stage 2, we use the same document corpus as before to train the document-level models.", "As seen from Figure 3 , the document MT variants still benefit from the document context even when the base model is trained on a larger bilingual corpus.", "For the Memory-to-Context model, we see large improvements of +0.72 and +1.44 METEOR for the source memory and dual memory models respectively, when compared to the baseline.", "On the other hand, for the Memory-to-Output model, the target memory model's METEOR score increases significantly by +1.09 compared to the baseline, slightly differing from the corresponding model using the smaller corpus (+1.2).", "Table 4 shows a comparison of our Memory-to-Context model variants to local source context-NMT models (Jean et al., 2017; Wang et al., 2017) .", "For French→English, our source memory model is comparable to both baselines.", "For German→English, our S-NMT+src mem model is comparable to Jean et al.", "(2017) but outperforms Wang et al.", "(2017) for one test set according to BLEU, and for both test sets according to METEOR.", "For Estonian→English, our model outperforms Jean et al.", "(2017) .", "Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context, since we train the sentence representations offline (as previously mentioned).", "However, the other two context baselines have access to that information, yet our model's performance is either better than or quite close to those models.", "We also look into the unigram BLEU scores to see how much our global source memory variants lead to improvement at the word level.", "From Table 5 , it can be seen that our model's performance is better than the baselines in the majority of cases.", "The S-NMT+both mems model gives the best results for all three language pairs, showing that leveraging both source and target document context is indeed beneficial for improving MT performance.", "Analysis Using Global/Local Target Context We first investigate whether using a local target context would have been equally sufficient in comparison to our global target memory model for the three datasets.", "We condition the decoder on the previous target sentence representation (obtained from the last hidden state of the decoder) by adding it as an additional input to all decoder states (PrevTrg), similar to our Memory-to-Context model.", "From Table 6 , we observe that for French→English and Estonian→English, using all sentences in the target context or just the previous target sentence gives comparable results.", "We may attribute this to these specific datasets; that is, documents from TED talks or European Parliament proceedings may depend more on the local than on the global context.", "However, for German→English, the target memory model performs the best, showing that for documents with richer context (e.g.", "news articles) we do need the global target document context to improve MT performance.", "Output Analysis To better understand the dual memory model, we look at the first sentence example in Table 7 .", "It can be seen that the source sentence has the noun \"Qimonda\" but the sentence-level NMT model fails to attend to it when generating the translation.", "On the other hand, the single memory models are better at delivering some, if not all, of the underlying information in the source sentence, but the dual memory model's translation quality surpasses them.", "This is because the word \"Qimonda\" was repeated in this specific document, providing a strong contextual signal to our global document context model, while the local context model by Wang et al.", "(2017) is still unable to correctly translate the noun even when it has access to the word-level information of previous sentences.", "We resort to manual evaluation as there is no standard metric which evaluates document-level discourse information like consistency or pronominal anaphora.", "By manual inspection, we observe that our models can identify nouns in the source sentence to resolve coreferent pronouns, as shown in the second example of Table 7 .", "Here the topic of the sentence is \"the country under the dictatorship of Lukashenko\" and our target and dual memory models are able to generate the appropriate pronoun/determiner as well as accurately translate the word 'diktatuur', hence producing a much better translation as compared to both baselines.", "Apart from these improvements, our models are better at improving the readability of sentences by generating more context-appropriate grammatical structures such as verbs and adverbs.", "Furthermore, to validate that our model improves the consistency of translations, we look at five documents (roughly 70 sentences) from the test set of Estonian-English, each of which had a word being repeated in the gold translation.", "Our model is able to resolve the consistency in 22 out of 32 cases, as compared to the sentence-based model, which only accurately translates 16 of those.", "Following Wang et al.", "(2017) , we also investigate the extent to which our model can correct errors made by the baseline system.", "We randomly choose five documents from the test set.", "Out of the 20 words/phrases which were incorrectly translated by the sentence-based model, our …", "Related Work Document-level Statistical MT There have been a few SMT-based attempts at document MT, but they are either restrictive or do not lead to significant improvements.", "Hardmeier and Federico (2010) identify links among words in the source document using a word-dependency model to improve the translation of anaphoric pronouns.", "Gong et al.", "(2011) make use of a cache-based system to save relevant information from the previously generated translations and use that to enhance document-level translation.", "Garcia et al.", "(2014) propose a two-pass approach to improve the translations already obtained by a sentence-level model.", "Docent is an SMT-based document-level decoder (Hardmeier et al., 2012, 2013) , which tries to modify the initial translation generated by the Moses decoder (Koehn et al., 2007) through stochastic local search and hill-climbing.", "Garcia et al.", "(2015) make use of neural-based continuous word representations to incorporate distributional semantics into Docent.", "In another work, Garcia et al.", "(2017) incorporate new word embedding features into Docent to improve the lexical consistency of translations.", "The proposed methods fail to yield improvements upon automatic evaluation.", "Larger Context Neural MT Jean et al.", "(2017) extend the vanilla attention-based neural MT model (Bahdanau et al., 2015) by conditioning the decoder on the previous sentence via attention over its words.", "Extending their model to consider the global source document context would be challenging due to the large size of the computation graph over all the words in the source document.", "Wang et al.", "(2017) employ a 2-level hierarchical RNN to summarise three previous source sentences, which is then used as an additional input to the decoder hidden state.", "Bawden et al.", "(2017) use multi-encoder NMT models to exploit context from the previous source and target sentence.", "They highlight the importance of target-side context but report deteriorated BLEU scores when using it.", "All these works consider a very local source/target context and completely ignore the global source and target document contexts.", "Conclusion We have proposed a document-level neural MT model that captures global source and target document context.", "Our model augments
the vanilla sentence-based NMT model with external memories to incorporate documental interdependencies on both source and target sides.", "We show statistically significant improvements of the translation quality on three language pairs.", "For future work, we intend to investigate models which incorporate specific discourse-level phenomena." ] }
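The memory read used by both memories above is a single soft attention over the cells: relevance $p = \mathrm{softmax}(q^\top M)$, output $\sum_i p_i m_i$. A minimal NumPy sketch, assuming a row-major memory of shape (K, d) and treating the softmax parameterisation as the standard MemNet choice:

import numpy as np

def memnet_read(memory, query):
    # memory: (K, d) matrix of memory cells; query: (d,) vector.
    # Attention over cells: p = softmax(M q); output = sum_i p_i * m_i.
    scores = memory @ query
    p = np.exp(scores - scores.max())  # numerically stable softmax
    p /= p.sum()
    return p @ memory

For the source memory the query is the sentence representation h_t; for the target memory it is s_t + W_at h_t, as in the equations above.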
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Neural Machine Translation (NMT)", "Memory Networks (MemNets)", "Document NMT as Structured Prediction", "Context Dependent NMT with MemNets", "Experiments and Analysis", "Main Results", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-129#paper-1351#slide-7
Example translation
Source (Estonian): qimonda taidab lissaboni strateegia eesmarke. | Target: qimonda meets the objectives of the lisbon strategy. | System outputs: "<UNK> is the objectives of the lisbon strategy." / "the millennium development goals are fulfilling the millennium goals of the lisbon strategy." / "in writing. - (ro) the lisbon strategy is fulfilling the objectives of the lisbon strategy." / "qimonda fulfils the aims of the lisbon strategy." / [Wang et al., 2017]: "<UNK> fulfils the objectives of the lisbon strategy."
Source (Estonian): qimonda taidab lissaboni strateegia eesmarke. | Target: qimonda meets the objectives of the lisbon strategy. | System outputs: "<UNK> is the objectives of the lisbon strategy." / "the millennium development goals are fulfilling the millennium goals of the lisbon strategy." / "in writing. - (ro) the lisbon strategy is fulfilling the objectives of the lisbon strategy." / "qimonda fulfils the aims of the lisbon strategy." / [Wang et al., 2017]: "<UNK> fulfils the objectives of the lisbon strategy."
[]
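The source memory construction described in this record's content (a sentence-level biRNN, then a document-level biRNN whose hidden states become the memory cells) can be sketched as follows. A plain tanh RNN stands in for the paper's pre-trained LSTM (sentence level) and GRU (document level) units, purely for readability; sent_embs is the list of offline sentence representations.

import numpy as np

def rnn(xs, W, U, b):
    # Simple tanh RNN over a sequence of vectors; returns all hidden states.
    h, out = np.zeros(U.shape[0]), []
    for x in xs:
        h = np.tanh(W @ x + U @ h + b)
        out.append(h)
    return out

def source_memory(sent_embs, params_fwd, params_bwd):
    # Document-level bidirectional RNN over pre-computed sentence embeddings;
    # the concatenated forward/backward states serve as the source memory cells.
    fwd = rnn(sent_embs, *params_fwd)
    bwd = rnn(sent_embs[::-1], *params_bwd)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

As the paper notes, this memory is built once per minibatch and held fixed during document translation, whereas the target memory is simply the decoder's last states for the current translations.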
GEM-SciDuet-train-129#paper-1351#slide-8
1351
Document Context Neural Machine Translation with Memory Networks
We present a document-level neural machine translation model which takes both source and target document context into account using memory networks. We model the problem as a structured prediction problem with interdependencies among the observed and hidden variables, i.e., the source sentences and their unobserved target translations in the document. The resulting structured prediction problem is tackled with a neural translation model equipped with two memory components, one each for the source and target side, to capture the documental interdependencies. We train the model end-to-end, and propose an iterative decoding algorithm based on block coordinate descent. Experimental results of English translations from French, German, and Estonian documents show that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Neural machine translation (NMT) has proven to be powerful (Sutskever et al., 2014; Bahdanau et al., 2015) .", "It is on-par, and in some cases, even surpasses the traditional statistical MT (Luong et al., 2015) while enjoying more flexibility and significantly less manual effort for feature engineering.", "Despite their flexibility, most neural MT models translate sentences independently.", "Discourse phenomenon such as pronominal anaphora and lexical consistency, may depend on long-range dependency going farther than a few previous sentences, are neglected in sentencebased translation (Bawden et al., 2017) .", "There are only a handful of attempts to document-wide machine translation in statistical and neural MT camps.", "Hardmeier and Federico (2010) ; Gong et al.", "(2011) ; Garcia et al.", "(2014) propose document translation models based on statistical MT but are restrictive in the way they incorporate the document-level information and fail to gain significant improvements.", "More recently, there have been a few attempts to incorporate source side context into neural MT (Jean et al., 2017; Wang et al., 2017; Bawden et al., 2017) ; however, these works only consider a very local context including a few previous source/target sentences, ignoring the global source and target documental contexts.", "The latter two report deteriorated performance when using the target-side context.", "In this paper, we present a document-level machine translation model which combines sentencebased NMT (Bahdanau et al., 2015) with memory networks (Sukhbaatar et al., 2015) .", "We capture the global source and target document context with two memory components, one each for the source and target side, and incorporate it into the sentence-based NMT by changing the decoder to condition on it as the sentence translation is generated.", "We conduct experiments on three language pairs: French-English, German-English and Estonian-English.", "The experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.", "Background Neural Machine Translation (NMT) Our document NMT model is grounded on sentence-based NMT model (Bahdanau et al., 2015) which contains an encoder to read the source sentence as well as an attentional decoder to generate the target translation.", "Encoder It is a bidirectional RNN consisting of two RNNs running in opposite directions over the source sentence: − → hi = −−→ RNN( − → 
h i−1, ES[xi]), ← − h i = ←−− RNN( ← − h i+1, ES[xi]) where E S [x i ] is embedding of the word x i from the embedding table E S of the source language, and − → h i and ← − h i are the hidden states of the forward and backward RNNs which can be based on the LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) units.", "Each word in the source sentence is then represented by the concatenation of the corresponding bidirectional hidden states, h i = [ − → h i ; ← − h i ].", "Decoder The generation of each word y j is conditioned on all of the previously generated words y <j via the state of the RNN decoder s j , and the source sentence via a dynamic context vector c j : yj ∼ softmax(Wy · rj + br) rj = tanh(sj + Wrc · cj + Wrj · ET [yj−1]) sj = tanh(Ws · sj−1 + Wsj · ET [yj−1] + Wsc · cj) where E T [y j ] is embedding of the word y j from the embedding table E T of the target language, and W matrices and b r vector are the parameters.", "The dynamic context vector c j is computed via c j = i α ji h i , where α j = softmax(a j ) a ji = v · tanh(W ae · h i + W at · s j−1 ) This is known as the attention mechanism which dynamically attends to relevant parts of the source necessary for generating the next target word.", "Memory Networks (MemNets) Memory Networks are a class of neural models that use external memories to perform inference based on long-range dependencies.", "A memory is a collection of vectors M = {m 1 , .., m K } constituting the memory cells, where each cell m k may potentially correspond to a discrete object x k .", "The memory is equipped with a read and optionally a write operation.", "Given a query vector q, the output vector generated by reading from the memory is |M | i=1 p i m i , where p i represents the relevance of the query to the i-th memory cell p = Document NMT as Structured Prediction We formulate document-wide machine translation as a structured prediction problem.", "Given a set of sentences {x 1 , .", ".", ".", ", x |d| } in a source document d, we are interested in generating the collection of their translations {y 1 , .", ".", ".", ", y |d| } taking into account interdependencies among them imposed by the document.", "We achieve this by the factor graph in Figure 1 to model the probability of the target document given the source document.", "Our model has two types of factors: • f θ (y t ; x t , x −t ) to capture the interdependencies between the translation y t , the corresponding source sentence x t and all the other sentences in the source document x −t , and • g θ (y t ; y −t ) to capture the interdependencies between the translation y t and all the other translations in the document y −t .", "Hence, the probability of a document translation given the source document is P (y 1 , .", ".", ".", ", y |d| |x 1 , .", ".", ".", ", x |d| ) ∝ exp t f θ (y t ; x t , x −t ) + g θ (y t ; y −t ) .", "The factors f θ and g θ are realised by neural architectures whose parameters are collectively denoted by θ.", "Training It is challenging to train the model parameters by maximising the (regularised) likelihood since computing the partition function is hard.", "This is due to the enormity of factors g θ (y t ; y −t ) over a large number of translation variables y t 's (i.e., the number of sentences in the document) as well as their unbounded domain (i.e., all sentences in the target language).", "Thus, we resort to maximising the pseudo-likelihood (Besag, 1975) for training the parameters: arg max θ d∈D |d| t=1 P θ (y t |x t , y −t , x −t ) (1) where D is 
the set of bilingual training documents, and |d| denotes the number of (bilingual) sentences in the document d = {(x t , y t )} |d| t=1 .", "We directly model the document-conditioned NMT model P θ (y t |x t , y −t , x −t ) using a neural architecture which subsumes both the f θ and g θ factors (covered in the next section).", "Decoding To generate the best translation for a document according to our model, we need to solve the following optimisation problem: arg max y 1 ,...,y |d| |d| t=1 P θ (y t |x t , y −t , x −t ) which is hard (due to similar reasons as mentioned earlier).", "We hence resort to a block coordinate descent optimisation algorithm.", "More specifically, we initialise the translation of each sentence using the base neural MT model P (y t |x t ).", "We then repeatedly visit each sentence in the document, and update its translation using our document-context dependent NMT model P (y t |x t , y −t , x −t ) while the translations of other sentences are kept fixed.", "Context Dependent NMT with MemNets We augment the sentence-level attentional NMT model by incorporating the document context (both source and target) using memory networks when generating the translation of a sentence, as shown in Figure 2 .", "Our model generates the target translation word-by-word from left to right, similar to the vanilla attentional neural translation model.", "However, it conditions the generation of a target word not only on the previously generated words and the current source sentence (as in the vanilla NMT model), but also on all the other source sentences of the document and their translations.", "That is, the generation process is as follows: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, y−t, x−t) (2) where y t,j is the j-th word of the t-th target sentence, y t,<j are the previously generated words, and x −t and y −t are as introduced previously.", "Our model represents the source and target document contexts as external memories, and attends to relevant parts of these external memories when generating the translation of a sentence.", "Let M [x −t ] and M [y −t ] denote external memories representing the source and target document context, respectively.", "These contain memory cells corresponding to all sentences in the document except the t-th sentence (described shortly).", "Let h t and s t be representations of the t-th source sentence and its current translation, from the encoder and decoder respectively.", "We make use of h t as the query to get the relevant context from the source external memory: c src t = MemNet(M [x −t ], h t ) Furthermore, for the t-th sentence, we get the relevant information from the target context: c trg t = MemNet(M [y −t ], s t + W at · h t ) where the query consists of the representation of the translation s t from the decoder endowed with that of the source sentence h t from the encoder to make the query robust to potential noises in the current translation and circumvent error propagation, and W at projects the source representation into the hidden state space.", "Now that we have representations of the relevant source and target document contexts, Eq.", "2 can be re-written as: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, c trg t , c src t ) (3) More specifically, the memory contexts c src t and c trg t are incorporated into the NMT decoder as: • Memory-to-Context in which the memory contexts are incorporated when computing the next decoder hidden state: • Memory-to-Output in which the memory contexts are incorporated in the 
output layer: s t,j = tanh(W s · s t,j−1 + W sj · E T [y t,j ] + W sc · c t,j + W sm · c src t + W st · c trg t ) y t,j ∼ softmax(W y · r t,j + W ym · c src t + W yt · c trg t + b r ) where W sm , W st , W ym , and W yt are the new parameter matrices.", "We use only the source, only the target, or both external memories as the additional conditioning contexts.", "Furthermore, we use either the Memory-to-Context or Memory-to-Output architectures for incorporating the document contexts.", "In the experiments, we will explore these different options to investigate the most effective combination.", "We now turn our attention to the construction of the external memories for the source and target sides of a document.", "The Source Memory We make use of a hierarchical 2-level RNN architecture to construct the external memory of the source document.", "More specifically, we pass each sentence of the document through a sentence-level bidirectional RNN to get the representation of the sentence (by concatenating the last hidden states of the forward and backward RNNs).", "We then pass the sentence representations through a document-level bidirectional RNN to propagate sentences' information across the document.", "We take the hidden states of the document-level bidirectional RNNs as the memory cells of the source external memory.", "The source external memory is built once for each minibatch, and does not change throughout the document translation.", "To be able to fit the computational graph of the document NMT model within GPU memory limits, we pre-train the sentence-level bidirectional RNN using the language modelling training objective.", "However, the document-level bidirectional RNN is trained together with other parameters of the document NMT model by back-propagating the document translation training objective.", "The Target Memory The memory cells of the target external memory represent the current translations of the document.", "Recall from the previous section that we use coordinate descent iteratively to update these translations.", "Let {y 1 , .", ".", ".", ", y |d| } be the current translations, and let {s |y 1 | , .", ".", ".", ", s |y |d| | } be the last states of the decoder when these translations were generated.", "We use these last decoder states as the cells of the external target memory.", "We could make use of hierarchical sentencedocument RNNs to transform the document translations into memory cells (similar to what we do for the source memory); however, it would have been computationally expensive and may have resulted in error propagation.", "We will show in the experiments that our efficient target memory construction is indeed effective.", "Experiments and Analysis Datasets.", "We conducted experiments on three language pairs: French-English, German-English and Estonian-English.", "Table 1 shows the statistics of the datasets used in our experiments.", "The French-English dataset is based on the TED Talks corpus 1 (Cettolo et al., 2012) where each talk is considered a document.", "The Estonian-English data comes from the Europarl v7 corpus 2 (Koehn, 2005) .", "Following Smith et al.", "(2013) , we split the speeches based on the SPEAKER tag and treat them as documents.", "The French-English and Estonian-English corpora were randomly split into train/dev/test sets.", "For German-English, we use the News Commentary v9 corpus 3 for training, news-dev2009 for development, Table 1 : Training/dev/test corpora statistics: number of documents (×100) and sentences (×1000), average 
document length (in sentences) and source/target vocabulary size (×1000).", "For De-En, we report statistics of the two test sets news-test2011 and news-test2016.", "and news-test2011 and news-test2016 as the test sets.", "The news-commentary corpus has document boundaries already provided.", "We pre-processed all corpora to remove very short documents and those with missing translations.", "Out-of-vocabulary and rare words (frequency less than 5) are replaced by the <UNK> token, following Cohn et al.", "(2016).", "4 Evaluation Measures We use BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores to measure the quality of the generated translations.", "We use bootstrap resampling (Clark et al., 2011) to measure statistical significance, p < 0.05, comparing to the baselines.", "Implementation and Hyperparameters We implement our document-level neural machine translation model in C++ using the DyNet library (Neubig et al., 2017) , on top of the basic sentence-level NMT implementation in mantis (Cohn et al., 2016) .", "For the source memory, the sentence and document-level bidirectional RNNs use LSTM and GRU units, respectively.", "The translation model uses GRU units for the bidirectional RNN encoder and the 2-layer RNN decoder.", "GRUs are used instead of LSTMs to reduce the number of parameters in the main model.", "The RNN hidden dimensions and word embedding sizes are set to 512 in the translation and memory components, and the alignment dimension is set to 256 in the translation model.", "Training We use a stage-wise method to train the variants of our document context NMT model.", "Firstly, we pre-train the Memory-to-Context/Memory-to-Output models, setting their readings from the source and target memories to the zero vector.", "This effectively learns parameters associated with the underlying sentence-based NMT model, which is then used as initialisation when training all parameters in the second stage (including the ones from the first stage).", "For the first stage, we make use of stochastic gradient descent (SGD) 5 with initial learning rate of 0.1 and a decay factor of 0.5 after the fourth epoch for a total of ten epochs.", "The convergence occurs in 6-8 epochs.", "For the second stage, we use SGD with an initial learning rate of 0.08 and a decay factor of 0.9 after the first epoch for a total of 15 epochs 6 .", "The best model is picked based on the dev-set perplexity.", "To avoid overfitting, we employ dropout with the rate 0.2 for the single memory model.", "For the dual memory model, we set dropout for Document RNN to 0.2 and for the encoder and decoder to 0.5.", "Mini-batching is used in both stages to speed up training.", "For the largest dataset, the document NMT model takes about 4.5 hours per epoch to train on a single P100 GPU, while the sentence-level model takes about 3 hours per epoch for the same settings.", "When training the document NMT model in the second stage, we need the target memory.", "One option would be to use the ground truth translations for building the memory.", "However, this may result in inferior training, since at the test time, the decoder iteratively updates the translation of sentences based on the noisy translations of other sentences (accessed via the target memory).", "Hence, while training the document NMT model, we construct the target memory from the translations generated by the pre-trained sentence-level model 7 .", "This effectively exposes the model to its potential test-time mistakes during the training time, 
"Main Results: We have three variants of our model, using: (i) only the source memory (S-NMT+src mem), (ii) only the target memory (S-NMT+trg mem), or (iii) both the source and target memories (S-NMT+both mems).", "[Footnote 5] In our initial experiments, we found SGD to be more effective than Adam/Adagrad; an observation also made by Bahar et al. (2017).", "[Footnote 6] For the document NMT model training, we did some preliminary experiments using different learning rates and used the scheme which converged to the best perplexity in the least number of epochs, while for sentence-level training we follow Cohn et al. (2016).", "[Footnote 7] We report results for two-pass decoding, i.e., we only update the translations once, using the initial translations generated from the base model; we tried multiple passes of decoding at test time, but it was not helpful.", "We compare these variants against the standard sentence-level NMT model (S-NMT).", "We also compare the source memory variants of our model to the local context-NMT models 8 of Jean et al. (2017) and Wang et al. (2017), which use a few previous source sentences as context, added to the decoder hidden state (similar to our Memory-to-Context model).", "Memory-to-Context: We consistently observe +1.15/+1.13 BLEU/METEOR score improvements across the three language pairs upon comparing our best model to S-NMT (see Table 2).", "Overall, our document NMT model with both memories has been the most effective variant for all three language pairs.", "We further experiment with training the target memory variants using gold translations instead of the generated ones for German-English.", "This led to −0.16 and −0.25 decreases 9 in the BLEU scores for the target-only and both-memory variants, which confirms the intuition of constructing the target memory by exposing the model to its own noise during training.", "Memory-to-Output: For French→English, all variants of the document NMT model show comparable performance when using BLEU; however, when evaluated using METEOR, the dual memory model is the best.", "For German→English, the target memory variants give comparable results, whereas for Estonian→English, the dual memory variant proves to be the best.", "Overall, the Memory-to-Context model variants perform better than their Memory-to-Output counterparts.", "We attribute this to the large number of parameters in the latter architecture (Table 3) and the limited amount of data.", "We further experiment with more data for training the sentence-based NMT to investigate the extent to which document context is useful in this setting.", "Table 4 (recovered excerpt): BLEU and METEOR for the local context baselines on Fr→En, De→En (NC-11/NC-16) and Et→En; Jean et al. (2017): BLEU 21.95, 6.04, 10.26, 21.67 and METEOR 24.10, 11.61, 15.56, 25.77; the Wang et al. (2017) row is not recoverable here.", "We randomly choose an additional 300K German-English sentence pairs from WMT'14 data to train the base NMT model in stage 1.", "In stage 2, we use the same document corpus as before to train the document-level models.", "As seen from Figure 3, the document MT variants still benefit from the document context even when the base model is trained on a larger bilingual corpus.", "For the Memory-to-Context model, we see large improvements of +0.72 and +1.44 METEOR for the source memory and dual memory models respectively, when compared to the baseline.", "On the other hand, for the Memory-to-Output model, the target memory model's METEOR score increases significantly, by +1.09 compared to the baseline, slightly differing from the corresponding model trained on the smaller corpus (+1.2).",
"Table 4 shows a comparison of our Memory-to-Context model variants to local source context-NMT models (Jean et al., 2017; Wang et al., 2017).", "For French→English, our source memory model is comparable to both baselines.", "For German→English, our S-NMT+src mem model is comparable to Jean et al. (2017) but outperforms Wang et al. (2017) for one test set according to BLEU, and for both test sets according to METEOR.", "For Estonian→English, our model outperforms Jean et al. (2017).", "Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context, since we do offline training to get the sentence representations (as previously mentioned).", "However, the other two context baselines have access to that information, yet our model's performance is either better than or quite close to those models.", "We also look into the unigram BLEU scores to see how much our global source memory variants lead to improvement at the word level.", "From Table 5, it can be seen that our model's performance is better than the baselines in the majority of cases.", "The S-NMT+both mems model gives the best results for all three language pairs, showing that leveraging both source and target document context is indeed beneficial for improving MT performance.", "Analysis: Using Global/Local Target Context.", "We first investigate whether using a local target context would have been equally sufficient in comparison to our global target memory model for the three datasets.", "We condition the decoder on the previous target sentence representation (obtained from the last hidden state of the decoder) by adding it as an additional input to all decoder states (PrevTrg), similar to our Memory-to-Context model.", "From Table 6, we observe that for French→English and Estonian→English, using all sentences in the target context or just the previous target sentence gives comparable results.", "We may attribute this to these specific datasets; that is, documents from TED talks or European Parliament proceedings may depend more on the local than on the global context.", "However, for German→English, the target memory model performs the best, showing that for documents with richer context (e.g. news articles) we do need the global target document context to improve MT performance.", "Output Analysis: To better understand the dual memory model, we look at the first sentence example in Table 7.", "It can be seen that the source sentence has the noun \"Qimonda\" but the sentence-level NMT model fails to attend to it when generating the translation.", "On the other hand, the single memory models are better at delivering some, if not all, of the underlying information in the source sentence, but the dual memory model's translation quality surpasses them.", "This is because the word \"Qimonda\" is repeated in this specific document, providing a strong contextual signal to our global document context model, while the local context model of Wang et al. (2017) is still unable to correctly translate the noun even though it has access to the word-level information of the previous sentences.", "We resort to manual evaluation, as there is no standard metric which evaluates document-level discourse information like consistency or pronominal anaphora.", "By manual inspection, we observe that our models can identify nouns in the source sentence to resolve coreferent pronouns, as shown in the second example of Table 7.",
"Here the topic of the sentence is \"the country under the dictatorship of Lukashenko\", and our target and dual memory models are able to generate the appropriate pronoun/determiner as well as accurately translate the word 'diktatuur', hence producing a much better translation compared to both baselines.", "Apart from these improvements, our models are better at improving the readability of sentences by generating more context-appropriate grammatical structures such as verbs and adverbs.", "Furthermore, to validate that our model improves the consistency of translations, we look at five documents (roughly 70 sentences) from the Estonian-English test set, each of which had a word being repeated in the gold translation.", "Our model is able to resolve the consistency in 22 out of 32 cases, compared to the sentence-based model, which only accurately translates 16 of those.", "Following Wang et al. (2017), we also investigate the extent to which our model can correct errors made by the baseline system.", "We randomly choose five documents from the test set.", "Out of the 20 words/phrases which were incorrectly translated by the sentence-based model, our", "Related Work: Document-level Statistical MT.", "There have been a few SMT-based attempts at document MT, but they are either restrictive or do not lead to significant improvements.", "Hardmeier and Federico (2010) identify links among words in the source document using a word-dependency model to improve the translation of anaphoric pronouns.", "Gong et al. (2011) make use of a cache-based system to save relevant information from the previously generated translations and use that to enhance document-level translation.", "Garcia et al. (2014) propose a two-pass approach to improve the translations already obtained by a sentence-level model.", "Docent is an SMT-based document-level decoder (Hardmeier et al., 2012, 2013) which tries to modify the initial translation generated by the Moses decoder (Koehn et al., 2007) through stochastic local search and hill-climbing.", "Garcia et al. (2015) make use of neural-based continuous word representations to incorporate distributional semantics into Docent.", "In another work, Garcia et al. (2017) incorporate new word embedding features into Docent to improve the lexical consistency of translations.", "The proposed methods fail to yield improvements upon automatic evaluation.", "Larger Context Neural MT: Jean et al. (2017) extend the vanilla attention-based neural MT model (Bahdanau et al., 2015) by conditioning the decoder on the previous sentence via attention over its words.", "Extending their model to consider the global source document context would be challenging due to the large size of the computation graph over all the words in the source document.", "Wang et al. (2017) employ a 2-level hierarchical RNN to summarise three previous source sentences, which is then used as an additional input to the decoder hidden state.", "Bawden et al. (2017) use multi-encoder NMT models to exploit context from the previous source and target sentence.", "They highlight the importance of target-side context but report deteriorated BLEU scores when using it.", "All these works consider a very local source/target context and completely ignore the global source and target document contexts.", "Conclusion: We have proposed a document-level neural MT model that captures global source and target document context.", "Our model augments
the vanilla sentence-based NMT model with external memories to incorporate documental interdependencies on both source and target sides.", "We show statistically significant improvements of the translation quality on three language pairs.", "For future work, we intend to investigate models which incorporate specific discourse-level phenomena." ] }
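To make the memory interface concrete, below is a minimal numpy sketch of the two operations described in the record above: an attention-based read from an external memory, and the Memory-to-Output conditioning of the softmax layer. The parameter names (W_y, W_ym, W_yt, b_r) follow the formulas quoted above; the dot-product relevance score, the toy dimensions, and the random initialisation are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memnet_read(memory, query):
    """Attention-based read: weight each memory cell by its relevance to the query."""
    p = softmax(memory @ query)   # relevance per cell (dot-product score; an assumption)
    return memory.T @ p           # convex combination of the memory cells

rng = np.random.default_rng(0)
H, V, S = 64, 1000, 12            # hidden size, vocab size, number of sentences (toy sizes)

M_src = rng.normal(size=(S, H))   # source document memory (one cell per sentence)
M_trg = rng.normal(size=(S, H))   # target document memory (last decoder states)
r_tj  = rng.normal(size=H)        # decoder output state for the current word

# Memory-to-Output: condition the word distribution on both document contexts.
W_y, W_ym, W_yt = (rng.normal(size=(V, H)) * 0.01 for _ in range(3))
b_r = np.zeros(V)

c_src = memnet_read(M_src, r_tj)  # relevant source document context
c_trg = memnet_read(M_trg, r_tj)  # relevant target document context
p_word = softmax(W_y @ r_tj + W_ym @ c_src + W_yt @ c_trg + b_r)
print(p_word.shape, round(p_word.sum(), 6))  # (1000,) 1.0
```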
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Neural Machine Translation (NMT)", "Memory Networks (MemNets)", "Document NMT as Structured Prediction", "Context Dependent NMT with MemNets", "Experiments and Analysis", "Main Results", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-129#paper-1351#slide-8
Example translation contd
... et riigis kehtib endiselt lukasenka diktatuur, mis rikub inim- ning etnilise vahemuse oigusi. ... this country is still under the dictatorship of lukashenko, breaching human rights and the rights of ethnic minorities. ... the country still remains in a position of lukashenko to violate human rights and ethnic minorities. ... the country still applies to the brutal dictatorship of human and ethnic minority rights. ... the country still keeps the <UNK> dictatorship that violates human rights and ethnic rights. ... the country still persists in lukashenkos dictatorship that violate human rights and ethnic minority rights. [Wang et al., 2017] ... there is still a regime in the country that is violating the rights of human and ethnic minority in the country.
... et riigis kehtib endiselt lukasenka diktatuur, mis rikub inim- ning etnilise vahemuse oigusi. ... this country is still under the dictatorship of lukashenko, breaching human rights and the rights of ethnic minorities. ... the country still remains in a position of lukashenko to violate human rights and ethnic minorities. ... the country still applies to the brutal dictatorship of human and ethnic minority rights. ... the country still keeps the <UNK> dictatorship that violates human rights and ethnic rights. ... the country still persists in lukashenkos dictatorship that violate human rights and ethnic minority rights. [Wang et al., 2017] ... there is still a regime in the country that is violating the rights of human and ethnic minority in the country.
[]
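The iterative decoding scheme this record refers to (initial sentence-level drafts, then block coordinate descent updates against a target memory built from the other sentences' current translations; two passes per footnote 7) reduces to a short loop. The `translate`, `build_src_memory`, and `build_trg_memory` arguments below are hypothetical stand-ins for the trained components:

```python
def decode_document(src_sents, build_src_memory, build_trg_memory, translate, passes=2):
    """Iterative document decoding: refine each sentence's translation while the
    translations of the other sentences are held fixed (block coordinate descent)."""
    src_mem = build_src_memory(src_sents)          # fixed for the whole document
    # Pass 0: initial drafts from the base sentence-level model (no target memory).
    trans = [translate(s, src_mem=None, trg_mem=None) for s in src_sents]
    for _ in range(passes - 1):
        for t, s in enumerate(src_sents):
            # Target memory built from the current translations of the *other* sentences.
            trg_mem = build_trg_memory(trans[:t] + trans[t + 1:])
            trans[t] = translate(s, src_mem=src_mem, trg_mem=trg_mem)
    return trans

# Toy stubs so the sketch runs end to end; real components come from the trained model.
if __name__ == "__main__":
    stub = lambda s, src_mem, trg_mem: s.upper()   # hypothetical decoder
    print(decode_document(["tere", "maailm"], list, list, stub))  # ['TERE', 'MAAILM']
```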
GEM-SciDuet-train-129#paper-1351#slide-9
1351
Document Context Neural Machine Translation with Memory Networks
We present a document-level neural machine translation model which takes both source and target document context into account using memory networks. We model the problem as a structured prediction problem with interdependencies among the observed and hidden variables, i.e., the source sentences and their unobserved target translations in the document. The resulting structured prediction problem is tackled with a neural translation model equipped with two memory components, one each for the source and target side, to capture the documental interdependencies. We train the model end-to-end, and propose an iterative decoding algorithm based on block coordinate descent. Experimental results of English translations from French, German, and Estonian documents show that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.
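The stage-wise SGD schedule reported for this model (stage 1: initial rate 0.1 with decay factor 0.5 after the fourth epoch, ten epochs; stage 2: initial rate 0.08 with decay factor 0.9 after the first epoch, 15 epochs) can be written out directly; the per-epoch step decay form is an assumption consistent with that description.

```python
def lr_schedule(epoch, stage):
    """Learning rate for a given (1-indexed) epoch, per the reported settings."""
    if stage == 1:                       # pre-training of the sentence-level parameters
        lr, decay, start = 0.1, 0.5, 4   # decay kicks in after the fourth epoch
    else:                                # joint training with the memory components
        lr, decay, start = 0.08, 0.9, 1  # decay kicks in after the first epoch
    return lr * decay ** max(0, epoch - start)

print([round(lr_schedule(e, 1), 4) for e in range(1, 11)])   # stage 1, 10 epochs
print([round(lr_schedule(e, 2), 4) for e in range(1, 16)])   # stage 2, 15 epochs
```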
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Neural machine translation (NMT) has proven to be powerful (Sutskever et al., 2014; Bahdanau et al., 2015) .", "It is on-par, and in some cases, even surpasses the traditional statistical MT (Luong et al., 2015) while enjoying more flexibility and significantly less manual effort for feature engineering.", "Despite their flexibility, most neural MT models translate sentences independently.", "Discourse phenomenon such as pronominal anaphora and lexical consistency, may depend on long-range dependency going farther than a few previous sentences, are neglected in sentencebased translation (Bawden et al., 2017) .", "There are only a handful of attempts to document-wide machine translation in statistical and neural MT camps.", "Hardmeier and Federico (2010) ; Gong et al.", "(2011) ; Garcia et al.", "(2014) propose document translation models based on statistical MT but are restrictive in the way they incorporate the document-level information and fail to gain significant improvements.", "More recently, there have been a few attempts to incorporate source side context into neural MT (Jean et al., 2017; Wang et al., 2017; Bawden et al., 2017) ; however, these works only consider a very local context including a few previous source/target sentences, ignoring the global source and target documental contexts.", "The latter two report deteriorated performance when using the target-side context.", "In this paper, we present a document-level machine translation model which combines sentencebased NMT (Bahdanau et al., 2015) with memory networks (Sukhbaatar et al., 2015) .", "We capture the global source and target document context with two memory components, one each for the source and target side, and incorporate it into the sentence-based NMT by changing the decoder to condition on it as the sentence translation is generated.", "We conduct experiments on three language pairs: French-English, German-English and Estonian-English.", "The experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR.", "Background Neural Machine Translation (NMT) Our document NMT model is grounded on sentence-based NMT model (Bahdanau et al., 2015) which contains an encoder to read the source sentence as well as an attentional decoder to generate the target translation.", "Encoder It is a bidirectional RNN consisting of two RNNs running in opposite directions over the source sentence: − → hi = −−→ RNN( − → 
h i−1, ES[xi]), ← − h i = ←−− RNN( ← − h i+1, ES[xi]) where E S [x i ] is embedding of the word x i from the embedding table E S of the source language, and − → h i and ← − h i are the hidden states of the forward and backward RNNs which can be based on the LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) units.", "Each word in the source sentence is then represented by the concatenation of the corresponding bidirectional hidden states, h i = [ − → h i ; ← − h i ].", "Decoder The generation of each word y j is conditioned on all of the previously generated words y <j via the state of the RNN decoder s j , and the source sentence via a dynamic context vector c j : yj ∼ softmax(Wy · rj + br) rj = tanh(sj + Wrc · cj + Wrj · ET [yj−1]) sj = tanh(Ws · sj−1 + Wsj · ET [yj−1] + Wsc · cj) where E T [y j ] is embedding of the word y j from the embedding table E T of the target language, and W matrices and b r vector are the parameters.", "The dynamic context vector c j is computed via c j = i α ji h i , where α j = softmax(a j ) a ji = v · tanh(W ae · h i + W at · s j−1 ) This is known as the attention mechanism which dynamically attends to relevant parts of the source necessary for generating the next target word.", "Memory Networks (MemNets) Memory Networks are a class of neural models that use external memories to perform inference based on long-range dependencies.", "A memory is a collection of vectors M = {m 1 , .., m K } constituting the memory cells, where each cell m k may potentially correspond to a discrete object x k .", "The memory is equipped with a read and optionally a write operation.", "Given a query vector q, the output vector generated by reading from the memory is |M | i=1 p i m i , where p i represents the relevance of the query to the i-th memory cell p = Document NMT as Structured Prediction We formulate document-wide machine translation as a structured prediction problem.", "Given a set of sentences {x 1 , .", ".", ".", ", x |d| } in a source document d, we are interested in generating the collection of their translations {y 1 , .", ".", ".", ", y |d| } taking into account interdependencies among them imposed by the document.", "We achieve this by the factor graph in Figure 1 to model the probability of the target document given the source document.", "Our model has two types of factors: • f θ (y t ; x t , x −t ) to capture the interdependencies between the translation y t , the corresponding source sentence x t and all the other sentences in the source document x −t , and • g θ (y t ; y −t ) to capture the interdependencies between the translation y t and all the other translations in the document y −t .", "Hence, the probability of a document translation given the source document is P (y 1 , .", ".", ".", ", y |d| |x 1 , .", ".", ".", ", x |d| ) ∝ exp t f θ (y t ; x t , x −t ) + g θ (y t ; y −t ) .", "The factors f θ and g θ are realised by neural architectures whose parameters are collectively denoted by θ.", "Training It is challenging to train the model parameters by maximising the (regularised) likelihood since computing the partition function is hard.", "This is due to the enormity of factors g θ (y t ; y −t ) over a large number of translation variables y t 's (i.e., the number of sentences in the document) as well as their unbounded domain (i.e., all sentences in the target language).", "Thus, we resort to maximising the pseudo-likelihood (Besag, 1975) for training the parameters: arg max θ d∈D |d| t=1 P θ (y t |x t , y −t , x −t ) (1) where D is 
the set of bilingual training documents, and |d| denotes the number of (bilingual) sentences in the document d = {(x t , y t )} |d| t=1 .", "We directly model the document-conditioned NMT model P θ (y t |x t , y −t , x −t ) using a neural architecture which subsumes both the f θ and g θ factors (covered in the next section).", "Decoding To generate the best translation for a document according to our model, we need to solve the following optimisation problem: arg max y 1 ,...,y |d| |d| t=1 P θ (y t |x t , y −t , x −t ) which is hard (due to similar reasons as mentioned earlier).", "We hence resort to a block coordinate descent optimisation algorithm.", "More specifically, we initialise the translation of each sentence using the base neural MT model P (y t |x t ).", "We then repeatedly visit each sentence in the document, and update its translation using our document-context dependent NMT model P (y t |x t , y −t , x −t ) while the translations of other sentences are kept fixed.", "Context Dependent NMT with MemNets We augment the sentence-level attentional NMT model by incorporating the document context (both source and target) using memory networks when generating the translation of a sentence, as shown in Figure 2 .", "Our model generates the target translation word-by-word from left to right, similar to the vanilla attentional neural translation model.", "However, it conditions the generation of a target word not only on the previously generated words and the current source sentence (as in the vanilla NMT model), but also on all the other source sentences of the document and their translations.", "That is, the generation process is as follows: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, y−t, x−t) (2) where y t,j is the j-th word of the t-th target sentence, y t,<j are the previously generated words, and x −t and y −t are as introduced previously.", "Our model represents the source and target document contexts as external memories, and attends to relevant parts of these external memories when generating the translation of a sentence.", "Let M [x −t ] and M [y −t ] denote external memories representing the source and target document context, respectively.", "These contain memory cells corresponding to all sentences in the document except the t-th sentence (described shortly).", "Let h t and s t be representations of the t-th source sentence and its current translation, from the encoder and decoder respectively.", "We make use of h t as the query to get the relevant context from the source external memory: c src t = MemNet(M [x −t ], h t ) Furthermore, for the t-th sentence, we get the relevant information from the target context: c trg t = MemNet(M [y −t ], s t + W at · h t ) where the query consists of the representation of the translation s t from the decoder endowed with that of the source sentence h t from the encoder to make the query robust to potential noises in the current translation and circumvent error propagation, and W at projects the source representation into the hidden state space.", "Now that we have representations of the relevant source and target document contexts, Eq.", "2 can be re-written as: P θ (yt|xt, y−t, x−t) = |y t | j=1 P θ (yt,j|yt,<j, xt, c trg t , c src t ) (3) More specifically, the memory contexts c src t and c trg t are incorporated into the NMT decoder as: • Memory-to-Context in which the memory contexts are incorporated when computing the next decoder hidden state: • Memory-to-Output in which the memory contexts are incorporated in the 
output layer: s t,j = tanh(W s · s t,j−1 + W sj · E T [y t,j ] + W sc · c t,j + W sm · c src t + W st · c trg t ) y t,j ∼ softmax(W y · r t,j + W ym · c src t + W yt · c trg t + b r ) where W sm , W st , W ym , and W yt are the new parameter matrices.", "We use only the source, only the target, or both external memories as the additional conditioning contexts.", "Furthermore, we use either the Memory-to-Context or Memory-to-Output architectures for incorporating the document contexts.", "In the experiments, we will explore these different options to investigate the most effective combination.", "We now turn our attention to the construction of the external memories for the source and target sides of a document.", "The Source Memory We make use of a hierarchical 2-level RNN architecture to construct the external memory of the source document.", "More specifically, we pass each sentence of the document through a sentence-level bidirectional RNN to get the representation of the sentence (by concatenating the last hidden states of the forward and backward RNNs).", "We then pass the sentence representations through a document-level bidirectional RNN to propagate sentences' information across the document.", "We take the hidden states of the document-level bidirectional RNNs as the memory cells of the source external memory.", "The source external memory is built once for each minibatch, and does not change throughout the document translation.", "To be able to fit the computational graph of the document NMT model within GPU memory limits, we pre-train the sentence-level bidirectional RNN using the language modelling training objective.", "However, the document-level bidirectional RNN is trained together with other parameters of the document NMT model by back-propagating the document translation training objective.", "The Target Memory The memory cells of the target external memory represent the current translations of the document.", "Recall from the previous section that we use coordinate descent iteratively to update these translations.", "Let {y 1 , .", ".", ".", ", y |d| } be the current translations, and let {s |y 1 | , .", ".", ".", ", s |y |d| | } be the last states of the decoder when these translations were generated.", "We use these last decoder states as the cells of the external target memory.", "We could make use of hierarchical sentencedocument RNNs to transform the document translations into memory cells (similar to what we do for the source memory); however, it would have been computationally expensive and may have resulted in error propagation.", "We will show in the experiments that our efficient target memory construction is indeed effective.", "Experiments and Analysis Datasets.", "We conducted experiments on three language pairs: French-English, German-English and Estonian-English.", "Table 1 shows the statistics of the datasets used in our experiments.", "The French-English dataset is based on the TED Talks corpus 1 (Cettolo et al., 2012) where each talk is considered a document.", "The Estonian-English data comes from the Europarl v7 corpus 2 (Koehn, 2005) .", "Following Smith et al.", "(2013) , we split the speeches based on the SPEAKER tag and treat them as documents.", "The French-English and Estonian-English corpora were randomly split into train/dev/test sets.", "For German-English, we use the News Commentary v9 corpus 3 for training, news-dev2009 for development, Table 1 : Training/dev/test corpora statistics: number of documents (×100) and sentences (×1000), average 
document length (in sentences) and source/target vocabulary size (×1000).", "For De-En, we report statistics of the two test sets news-test2011 and news-test2016.", "and news-test2011 and news-test2016 as the test sets.", "The news-commentary corpus has document boundaries already provided.", "We pre-processed all corpora to remove very short documents and those with missing translations.", "Out-of-vocabulary and rare words (frequency less than 5) are replaced by the <UNK> token, following Cohn et al.", "(2016).", "4 Evaluation Measures We use BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores to measure the quality of the generated translations.", "We use bootstrap resampling (Clark et al., 2011) to measure statistical significance, p < 0.05, comparing to the baselines.", "Implementation and Hyperparameters We implement our document-level neural machine translation model in C++ using the DyNet library (Neubig et al., 2017) , on top of the basic sentence-level NMT implementation in mantis (Cohn et al., 2016) .", "For the source memory, the sentence and document-level bidirectional RNNs use LSTM and GRU units, respectively.", "The translation model uses GRU units for the bidirectional RNN encoder and the 2-layer RNN decoder.", "GRUs are used instead of LSTMs to reduce the number of parameters in the main model.", "The RNN hidden dimensions and word embedding sizes are set to 512 in the translation and memory components, and the alignment dimension is set to 256 in the translation model.", "Training We use a stage-wise method to train the variants of our document context NMT model.", "Firstly, we pre-train the Memory-to-Context/Memory-to-Output models, setting their readings from the source and target memories to the zero vector.", "This effectively learns parameters associated with the underlying sentence-based NMT model, which is then used as initialisation when training all parameters in the second stage (including the ones from the first stage).", "For the first stage, we make use of stochastic gradient descent (SGD) 5 with initial learning rate of 0.1 and a decay factor of 0.5 after the fourth epoch for a total of ten epochs.", "The convergence occurs in 6-8 epochs.", "For the second stage, we use SGD with an initial learning rate of 0.08 and a decay factor of 0.9 after the first epoch for a total of 15 epochs 6 .", "The best model is picked based on the dev-set perplexity.", "To avoid overfitting, we employ dropout with the rate 0.2 for the single memory model.", "For the dual memory model, we set dropout for Document RNN to 0.2 and for the encoder and decoder to 0.5.", "Mini-batching is used in both stages to speed up training.", "For the largest dataset, the document NMT model takes about 4.5 hours per epoch to train on a single P100 GPU, while the sentence-level model takes about 3 hours per epoch for the same settings.", "When training the document NMT model in the second stage, we need the target memory.", "One option would be to use the ground truth translations for building the memory.", "However, this may result in inferior training, since at the test time, the decoder iteratively updates the translation of sentences based on the noisy translations of other sentences (accessed via the target memory).", "Hence, while training the document NMT model, we construct the target memory from the translations generated by the pre-trained sentence-level model 7 .", "This effectively exposes the model to its potential test-time mistakes during the training time, 
resulting in more robust learned parameters.", "Main Results We have three variants of our model, using: (i) only the source memory (S-NMT+src mem), (ii) only the target memory (S-NMT+trg mem), or 5 In our initial experiments, we found SGD to be more effective than Adam/Adagrad; an observation also made by Bahar et al.", "(2017) .", "6 For the document NMT model training, we did some preliminary experiments using different learning rates and used the scheme which converged to the best perplexity in the least number of epochs while for sentence-level training we follow Cohn et al.", "(2016) .", "7 We report results for two-pass decoding, i.e., we only update the translations once using the initial translations generated from the base model.", "We tried multiple passes of decoding at test-time but it was not helpful.", "(iii) both the source and target memories (S-NMT+both mems).", "We compare these variants against the standard sentence-level NMT model (S-NMT).", "We also compare the source memory variants of our model to the local context-NMT models 8 of Jean et al.", "(2017) and Wang et al.", "(2017) , which use a few previous source sentences as context, added to the decoder hidden state (similar to our Memory-to-Context model).", "Memory-to-Context We consistently observe +1.15/+1.13 BLEU/METEOR score improvements across the three language pairs upon comparing our best model to S-NMT (see Table 2 ).", "Overall, our document NMT model with both memories has been the most effective variant for all of the three language pairs.", "We further experiment to train the target memory variants using gold translations instead of the generated ones for German-English.", "This led to −0.16 and −0.25 decrease 9 in the BLEU scores for the target-only and both-memory variants, which confirms the intuition of constructing the target memory by exposing the model to its noises during training time.", "guage pairs.", "For French→English, all variants of document NMT model show comparable performance when using BLEU; however, when evaluated using METEOR, the dual memory model is the best.", "For German→English, the target memory variants give comparable results, whereas for Estonian→English, the dual memory variant proves to be the best.", "Overall, the Memory-to-Context model variants perform better than their Memory-to-Output counterparts.", "We attribute this to the large number of parameters in the latter architecture (Table 3 ) and limited amount of data.", "We further experiment with more data for train-BLEU METEOR Fr→En De→En Et→EnFr→En De→En Et→En NC-11 NC-16 NC-11 NC-16 Jean et al.", "(2017) 21.95 6.04 10.26 21.67 24.10 11.61 15.56 25.77 Wang et al.", "(2017) ing the sentence-based NMT to investigate the extent to which document context is useful in this setting.", "We randomly choose an additional 300K German-English sentence pairs from WMT'14 data to train the base NMT model in stage 1.", "In stage 2, we use the same document corpus as before to train the document-level models.", "As seen from Figure 3 , the document MT variants still benefit from the document context even when the base model is trained on a larger bilingual corpus.", "For the Memory-to-Context model, we see massive improvements of +0.72 and +1.44 METEOR scores for the source memory and dual memory model respectively, when compared to the baseline.", "On the other hand, for the Memory-to-Output model, the target memory model's METEOR score increases significantly by +1.09 compared to the baseline, slightly differing from the 
corresponding model using the smaller corpus (+1.2).", "Table 4 shows comparison of our Memory-to-Context model variants to local source context-NMT models (Jean et al., 2017; Wang et al., 2017) .", "For French→English, our source memory model is comparable to both baselines.", "For German→English, our S-NMT+src mem model is comparable to Jean et al.", "(2017) but outperforms Wang et al.", "(2017) for one test set according to BLEU, and for both test sets according to METEOR.", "For Estonian→English, our model outperforms Jean et al.", "(2017) .", "Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context since we do an offline training to get the sentence representations (as previously mentioned).", "However, the other two context baselines have access to that information, yet our model's performance is either better or quite close to those models.", "We also look into the unigram BLEU scores to see how much our global source memory variants lead to improvement at the word-level.", "From Table 5 , it can be seen that our model's performance is better than the baselines for majority of the cases.", "The S-NMT+both mems model gives the best results for all three language pairs, showing that leveraging both source and target document context is indeed beneficial for improving MT performance.", "Memory-to-Output From Local Source Context Models Analysis Using Global/Local Target Context We first investigate whether using a local target context would have been equally sufficient in comparison to our global target memory model for the three datasets.", "We condition the decoder on the previous target sentence representation (obtained from the last hidden state of the decoder) by adding it as an additional input to all decoder states (PrevTrg) similar to our Memory-to-Context model.", "From Table 6 , we observe that for French→English and Estonian→English, using all sentences in the target context or just the previous target sentence gives comparable results.", "We may attribute this to these specific datasets, that is documents from TED talks or European Parliament Proceedings may depend more on the local than on the global context.", "However, for German→English , the target memory model performs the best show- ing that for documents with richer context (e.g.", "news articles) we do need the global target document context to improve MT performance.", "Output Analysis To better understand the dual memory model, we look at the first sentence example in Table 7 .", "It can be seen that the source sentence has the noun \"Qimonda\" but the sentencelevel NMT model fails to attend to it when generating the translation.", "On the other hand, the single memory models are better in delivering some, if not all, of the underlying information in the source sentence but the dual memory model's translation quality surpasses them.", "This is because the word \"Qimonda\" was being repeated in this specific document, providing a strong contextual signal to our global document context model while the local context model by Wang et al.", "(2017) is still unable to correctly translate the noun even when it has access to the word-level information of previous sentences.", "We resort to manual evaluation as there is no standard metric which evaluates document-level discourse information like consistency or pronominal anaphora.", "By manual inspection, we observe that our models can identify nouns in the source sentence to resolve coreferent 
pronouns, as shown in the second example of Table 7 .", "Here the topic of the sentence is \"the country under the dictatorship of Lukashenko\" and our target and dual memory models are able to generate the appropriate pronoun/determiner as well as accurately translate the word 'diktatuur', hence producing much better translation as compared to both baselines.", "Apart from these improvements, our models are better in improving the readability of sentences by generating more context appropriate grammatical structures such as verbs and adverbs.", "Furthermore, to validate that our model improves the consistency of translations, we look at five documents (roughly 70 sentences) from the test set of Estonian-English, each of which had a word being repeated in the gold translation.", "Our model is able to resolve the consistency in 22 out of 32 cases as compared to the sentencebased model which only accurately translates 16 of those.", "Following Wang et al.", "(2017) , we also investigate the extent to which our model can correct errors made by the baseline system.", "We randomly choose five documents from the test set.", "Out of the 20 words/phrases which were incorrectly translated by the sentence-based model, our Related Work Document-level Statistical MT There have been a few SMT-based attempts to document MT, but they are either restrictive or do not lead to significant improvements.", "Hardmeier and Federico (2010) identify links among words in the source document using a word-dependency model to improve translation of anaphoric pronouns.", "Gong et al.", "(2011) make use of a cache-based system to save relevant information from the previously generated translations and use that to enhance document-level translation.", "Garcia et al.", "(2014) propose a two-pass approach to improve the translations already obtained by a sentencelevel model.", "Docent is an SMT-based document-level decoder (Hardmeier et al., 2012 (Hardmeier et al., , 2013 , which tries to modify the initial translation generated by the Moses decoder (Koehn et al., 2007) through stochastic local search and hill-climbing.", "Garcia et al.", "(2015) make use of neural-based continuous word representations to incorporate distributional semantics into Docent.", "In another work, Garcia et al.", "(2017) incorporate new word embedding features into Docent to improve the lexical consistency of translations.", "The proposed methods fail to yield improvements upon automatic evaluation.", "Larger Context Neural MT Jean et al.", "(2017) extend the vanilla attention-based neural MT model (Bahdanau et al., 2015) by conditioning the decoder on the previous sentence via attention over its words.", "Extending their model to consider the global source document context would be challenging due to the large size of computation graph over all the words in the source document.", "Wang et al.", "(2017) employ a 2-level hierarichal RNN to summarise three previous source sentences, which is then used as an additional input to the decoder hidden state.", "Bawden et al.", "(2017) use multi-encoder NMT models to exploit context from the previous source and target sentence.", "They highlight the importance of targetside context but report deteriorated BLEU scores when using it.", "All these works consider a very local source/target context and completely ignore the global source and target document contexts.", "Conclusion We have proposed a document-level neural MT model that captures global source and target document context.", "Our model augments 
the vanilla sentence-based NMT model with external memories to incorporate documental interdependencies on both source and target sides.", "We show statistically significant improvements of the translation quality on three language pairs.", "For future work, we intend to investigate models which incorporate specific discourse-level phenomena." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Neural Machine Translation (NMT)", "Memory Networks (MemNets)", "Document NMT as Structured Prediction", "Context Dependent NMT with MemNets", "Experiments and Analysis", "Main Results", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-129#paper-1351#slide-9
Conclusion
Proposed a model which incorporates the global source and target document contexts Proposed effective training and decoding methodologies for our model Investigate document-context NMT models which incorporate specific discourse-level phenomena
Proposed a model which incorporates the global source and target document contexts Proposed effective training and decoding methodologies for our model Investigate document-context NMT models which incorporate specific discourse-level phenomena
[]
GEM-SciDuet-train-130#paper-1353#slide-0
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, has been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F1 = 0.78), and satire from both (F1 = 0.81). But stylometry is no silver bullet, as style-based fake news detection does not work (F1 = 0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection.
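The style model described in this record's methodology section (character, stop word, and part-of-speech n-grams with n in [1, 3], plus readability scores) can be illustrated with a minimal extractor; the tiny stop word list and the Automated Readability Index below are illustrative stand-ins for the full feature set and the ten readability measures the paper employs.

```python
import re
from collections import Counter

STOP = {"the", "a", "of", "and", "to", "in", "that", "is"}   # toy stop word list

def char_ngrams(text, n):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def stopword_ngrams(text, n):
    # n-grams over the sequence of stop words only (content words filtered out)
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w in STOP]
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def automated_readability_index(text):
    words = re.findall(r"\w+", text)
    sents = max(1, len(re.findall(r"[.!?]+", text)))
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / sents - 21.43

doc = "The senator denied the claim. A spokesperson said that the report is false."
features = {}
for n in (1, 2, 3):                      # n in [1, 3], as in the paper
    features.update({f"c{n}:{g}": c for g, c in char_ngrams(doc, n).items()})
    features.update({f"s{n}:{g}": c for g, c in stopword_ngrams(doc, n).items()})
features["ari"] = automated_readability_index(doc)
print(len(features), round(features["ari"], 2))
```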
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232 ], "paper_content_text": [ "Introduction The media and the public are currently discussing the recent phenomenon of \"fake news\" and its potential role in swaying elections, how it may affect society, and what can and should be done about it.", "Prone to misunderstanding and misue, the term \"fake news\" arose from the observation that, in social media, a certain kind of 'news' spreads much more successfully than others, and this kind of 'news' is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths.", "Although traditional yellow press has been spreading 'news' of varying de-grees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause.", "The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line.", "For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake-a fundamental right of every free society.", "Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition.", "While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy.", "At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous.", "Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough.", "We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship.", "In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way.", "After a review of related work, 
Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments.", "Related Work Approaches to fake news detection divide into three categories (Figure 1 ): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and stylebased (by analyzing writing style).", "Knowledge-based fake news detection.", "Methods from information retrieval have been proposed early on to determine the veracity of web documents.", "For example, Etzioni et al.", "(2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question.", "Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim.", "Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015) .", "Other approaches rely on knowledge bases, including the semantic web and linked open data.", "Wu et al.", "(2014) \"perturb\" a claim in question to query knowledge bases, using the result variations as indicator of the support a knowledge base offers for the claim.", "Ciampaglia et al.", "(2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm.", "However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016) .", "Context-based fake news detection.", "Here, fake news items are identified via meta information and spread patterns.", "For example, Long et al.", "(2017) show that author information can be a useful feature for fake news detection, and Derczynski et al.", "(2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks.", "The Facebook analysis of Mocanu et al.", "(2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former.", "Similarly, Acemoglu et al.", "(2010) , Kwon et al.", "(2013) , Ma et al.", "(2017) , and model the spread of (mis-)information, while Budak et al.", "(2011) and Nguyen et al.", "(2012) propose algorithms to limit its spread.", "The efficacy of countermeasures like debunking sites is studied by Tambuscio et al.", "(2015) .", "While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content.", "Long et al., 2017 Mocanu et al., 2015 Acemoglu et al., 2010 Kwon et al., 2013 Ma et al., 2017 Budak et al., 2011 Nguyen et al.", "2012 Derczynski et al., 2017 Tambuscio et al., 2015 Afroz et al., 2012 Badaskar et al., 2008 Rubin et al., 2016 Rashkin et al., 2017 Horne and Adali, 2017 Pérez-Rosas et al., 2017 Wang et al., 2017 Bourgonje et al., 2017 Wu et al., 2014 Ciampaglia et al, 2015 Shi and Weninger, 2016 Etzioni et al., 2018 Magdy and Wanas, 2010 Ginsca et al., 2015 Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work.", "Style-based fake news detection.", "Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis-a result from forensic psychology which asserts that memories of reallife, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967) 
.", "The hypothesis led to the development of forensic tools to assess testimonies at the statement level.", "Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example and .", "In this regard, use rhetorical structure theory as a measure of story coherence and as an indicator for fake news.", "Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements.", "A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017) .", "Most prominently, stance detection was the task of the Fake News Challenge 1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach.", "Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al.", "(1998) assesses entire texts.", "Common applications are author profiling (age, gender, etc.)", "and genre classification.", "Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al.", "(2012) ).", "As an early precursor to fake news detection, Badaskar et al.", "(2008) train models to identify news items that were automatically generated.", "Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al.", "(2016) , ).", "Rashkin et al.", "(2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news.", "We make use of their results by incorporating the bestperforming style features identified.", "Finally, two preprint papers have been recently shared.", "Horne and Adali (2017) use style features for fake news detection.", "However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles.", "We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data.", "Similarly, Pérez-Rosas et al.", "(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. 
real.", "The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web.", "For style analysis, the former dataset may not be suitable, since the authors note themselves that \"workers succeeded in mimicking the reporting style from the original news\".", "The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias.", "Their feature selection follows that of Rubin et al.", "(2016) , which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable.", "The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.", "2 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27.", "Table 1 gives an overview.", "Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones.", "All publishers earned Facebook's blue checkmark , indicating authenticity and an elevated status within the network.", "Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties.", "Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al.", "(2016) reported key insights as a data journalism article.", "The annotations were published alongside the article.", "3 However, this data only comprises URLs to the original Facebook posts.", "To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability.", "Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing.", "Manual fact-checking.", "A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless.", "Therefore, posts were rated \"mostly true,\" \"mixture of true and false,\" \"mostly false,\" or, if the post was opinion-driven or otherwise lacked a factual claim, \"no factual content.\"", "Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin.", "The ratings \"mixture of true and false\" and \"mostly false\" had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one.", "Finally, all news rated \"mostly false\" underwent a final check to ensure the rating was justified, lest the respective publishers would contest it.", "The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately.", "The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up.", "This rating does not allow for unsupported speculation or claims.", "Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but 
some elements or claims are not.", "This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate.", "It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link.", "Finally, use this rating for news articles that are based on unconfirmed information.", "Mostly false: Most or all of the information in the post or in the link being shared is inaccurate.", "This should also be used when the central claim being made is false.", "No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim.", "This is also the category to use for posts that are of the \"Like this if you think...\" variety.", "Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers.", "Annotations were recorded at the article level, not at the statement level.", "For text categorization, this is sufficient.", "At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists.", "Table 1 shows the fact-checking results and some key statistics per article.", "Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false.", "Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false.", "Publisher \"The Other 98%\" sticks out by achieving an almost perfect score.", "By contrast, almost 45% of the right-wing articles are a mixture of true and false (153) or mostly false (72).", "Here, publisher \"Right Wing News\" sticks out by supplying more than half of the mixtures of true and false alone, whereas mostly false articles are equally distributed.", "Corpus Statistics Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long, with word counts ranging from 550 words on average at ABC News to 800 at Politico.", "Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging about 420 words and 400 words, respectively.", "Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more.", "When articles comprise links, they are usually external ones, whereas ABC News mostly uses internal links, and only half of the links found in Politico articles are external.", "Left-wing news articles stick out by containing almost twice as many links as mainstream and right-wing ones.", "Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false.", "Arguably, the latter may not be exactly what is deemed \"fake news\" (as in: a complete fabrication); however, practice shows that fake news is hardly ever devoid of truth.", "More often, true facts are misconstrued or framed
badly.", "In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false-except for satire-fake news, and disregard all articles rated non-factual.", "Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al.", "(2007) , which we employ for the first time to distinguish genre styles as opposed to author styles.", "For sake of reproducibility, all our code has been published.", "4 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain.", "The former are n-grams, n in [1, 3] , of characters, stop words, and parts-of-speech.", "Further, we employ 10 readability scores 5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966) .", "The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length.", "In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents.", "Discarding these features prevents overfitting and improves the chances that our model will generalize.", "If not stated otherwise, our experiments share a common setup.", "In order to avoid biases from the respective training sets, we balance them using oversampling.", "Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher's style.", "For non-Unmasking experiments we use WEKA's random forest implementation with default settings.", "Unmasking Genre Styles Unmasking, as proposed by Koppel et al.", "(2007) , is a meta learning approach for authorship verification.", "We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. 
mainstream news.", "This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation.", "Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author.", "Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope.", "A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors.", "Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier-at least for long documents that can be split up into a sufficient number of chunks.", "It turns out that what applies to the style of authors also applies to genre styles.", "We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input.", "When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author.", "Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions.", "Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F 1 .", "We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake.", "Also, the tasks we are tackling are new, so that little is known to date about user preferences.", "This is also why we chose the evenly-balanced F 1 .", "Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1.", "Can (left/right) hyperpartisanship be distinguished from the mainstream?", "2.", "Is style-based fake news detection feasible?", "3.", "Can fake news be distinguished from satire?", "Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream.", "To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream.", "To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of 
just one.", "Hyperpartisanship vs.", "Mainstream A.", "Predicting orientation.", "Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream-an obvious first experiment for our dataset.", "Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag.", "The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations.", "When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles.", "Misclassified mainstream articles spread almost evenly across the other classes.", "The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks.", "This holds for all subsequent experiments.", "B.", "Predicting hyperpartisanship.", "Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work.", "Rather, we trained a binary classifier to discriminate hyperpartisanship in general from the mainstream.", "Table 3 shows the performance values.", "This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline.", "Comparing Table 2 and Table 3 , we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation?", "Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas.", "Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation.", "Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. 
Validation using leave-out classification.", "If left-wing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa?", "Table 4 shows the results of this experiment.", "As indicated by proportions well above 0.50, full style-based classifiers have a tendency of classifying articles of the left-out orientation as hyperpartisan rather than as mainstream.", "D.", "Validation using Unmasking.", "Having introduced the Unmasking approach in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity.", "We apply Unmasking as described in Section 4.2 to pairs of sets of left, right, and mainstream articles.", "Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves).", "The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation.", "In case of the left-wing orientation, where fewer than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs.", "As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others.", "This result hence matches the findings of the previous experiments.", "With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than either of the two has with the mainstream.", "Another, more tangible (i.e., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be discriminated well from the mainstream: in particular, the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article's style can be assessed immediately without referring to external information.", "Fake vs. Real (vs.
Satire) This series of experiments targets research questions (2) and (3).", "Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire.", "A.", "Predicting veracity.", "When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection.", "A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task.", "We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier.", "Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.", "Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.", "While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F-measure.", "We conclude that style-based fake news classification simply does not work in general.", "B.", "Predicting satire.", "Yet, not all fake news are the same.", "One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers.", "Regardless of the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes.", "Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al.", "(2016) (the S-n-L News DB corpus), distinguishing satire from real news.", "This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class).", "As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82 and an F1 score of 0.81.", "It clearly improves over topic classification, but does not outperform Rubin et al.", "'s classifier, which includes features based on topic, absurdity, grammar, and punctuation.", "We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along with the topics of news.", "A classifier with topic features therefore does not generalize.", "Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it.", "C. Unmasking satire.", "Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture.", "We assess the style similarity of satire from Rubin et al.", "'s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above.", "Figure 3 shows the resulting Unmasking curves.", "The curve for the pair of fake vs.
real news drops faster compared to the other two pairs.", "Apparently, the style of fake news has more in common with that of real news than either of the two has with satire.", "These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy.", "Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers.", "These are only the most salient things that need to be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered.", "Notwithstanding the many attacks on fake news via one form of fact-checking or another, we believe it worthwhile to mount our attack from another angle: writing style.", "We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone.", "Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism.", "We further show that satire can be distinguished well from other news, ensuring that humor will not be cast out by fake news detection technology.", "All of these results offer new, tangible, short-term avenues of development, since large-scale fact-checking is still far out of reach.", "Employed as a pre-filtering technology to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact-checkers to the most likely sources of fake news." ] }
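The adapted Unmasking procedure described in the paper content above (Section 4.2) admits a compact sketch. The following is a minimal illustration rather than the authors' published code: it assumes scikit-learn, substitutes a linear SVM for the unspecified linear classifier, and all function and parameter names are placeholders.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def unmasking_curve(docs_a, docs_b, vocab_size=250, drop_per_round=10, rounds=15):
    # Style model: the vocab_size most frequent words across both document sets.
    X = CountVectorizer(max_features=vocab_size).fit_transform(docs_a + docs_b)
    X = X.toarray().astype(float)
    y = np.array([0] * len(docs_a) + [1] * len(docs_b))
    active = np.arange(X.shape[1])  # feature columns still in the style model
    curve = []
    for _ in range(rounds):
        # Measure how well a linear classifier separates the two sets ...
        curve.append(cross_val_score(LinearSVC(), X[:, active], y, cv=3).mean())
        # ... then remove the currently most discriminative features.
        clf = LinearSVC().fit(X[:, active], y)
        top = np.argsort(-np.abs(clf.coef_[0]))[:drop_per_round]
        active = np.delete(active, top)
    return curve

Plotted over rounds, the curve for left vs. right articles would be expected to fall more steeply than either orientation vs. the mainstream, mirroring the Figure 2 finding that a steeper decline indicates more similar styles.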
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "The BuzzFeed-Webis Fake News Corpus", "Corpus Construction", "Limitations", "Corpus Statistics", "Operationalizing Fake News", "Methodology", "Style Features and Feature Selection", "Unmasking Genre Styles", "Baselines", "Performance Measures", "Experiments", "Hyperpartisanship vs. Mainstream", "Fake vs. Real (vs. Satire)", "Conclusion" ] }
GEM-SciDuet-train-130#paper-1353#slide-0
What are Fake News
Disinformation displayed as news articles Image: Claire Wardle, First Draft
Disinformation displayed as news articles Image: Claire Wardle, First Draft
[]
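Experiment C reported above (validation using leave-out classification) reduces to a small helper: train a binary style classifier on one hyperpartisan orientation versus the mainstream, then measure how often it assigns the hyperpartisan class to articles of the held-out orientation. A sketch under the same scikit-learn assumptions as above; the function name is hypothetical, and proportions well above 0.5 would support the shared-style hypothesis.

import numpy as np

def leave_out_proportion(model, train_texts, train_labels, held_out_texts,
                         hyperpartisan_label=1):
    # model: any text classifier with fit/predict, e.g. the pipeline sketched above;
    # train_labels: hyperpartisan_label vs. a mainstream label, e.g. right (1) vs. main (0).
    model.fit(train_texts, train_labels)
    predictions = model.predict(held_out_texts)  # e.g. the held-out left-wing articles
    return float(np.mean(predictions == hyperpartisan_label))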
GEM-SciDuet-train-130#paper-1353#slide-1
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, has been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F1 = 0.78), and satire from both (F1 = 0.81). But stylometry is no silver bullet, as style-based fake news detection does not work (F1 = 0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection.
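The class-wise precision, recall, and F1 quoted in this abstract (and throughout the paper content) follow the standard definitions; for illustration only, with made-up predictions and scikit-learn assumed:

from sklearn.metrics import precision_recall_fscore_support

y_true = ["left", "right", "main", "main", "right", "left"]
y_pred = ["right", "right", "main", "main", "left", "left"]
precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=["left", "right", "main"], zero_division=0)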
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232 ], "paper_content_text": [ "Introduction The media and the public are currently discussing the recent phenomenon of \"fake news\" and its potential role in swaying elections, how it may affect society, and what can and should be done about it.", "Prone to misunderstanding and misue, the term \"fake news\" arose from the observation that, in social media, a certain kind of 'news' spreads much more successfully than others, and this kind of 'news' is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths.", "Although traditional yellow press has been spreading 'news' of varying de-grees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause.", "The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line.", "For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake-a fundamental right of every free society.", "Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition.", "While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy.", "At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous.", "Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough.", "We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship.", "In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way.", "After a review of related work, 
Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments.", "Related Work Approaches to fake news detection divide into three categories (Figure 1 ): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and stylebased (by analyzing writing style).", "Knowledge-based fake news detection.", "Methods from information retrieval have been proposed early on to determine the veracity of web documents.", "For example, Etzioni et al.", "(2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question.", "Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim.", "Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015) .", "Other approaches rely on knowledge bases, including the semantic web and linked open data.", "Wu et al.", "(2014) \"perturb\" a claim in question to query knowledge bases, using the result variations as indicator of the support a knowledge base offers for the claim.", "Ciampaglia et al.", "(2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm.", "However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016) .", "Context-based fake news detection.", "Here, fake news items are identified via meta information and spread patterns.", "For example, Long et al.", "(2017) show that author information can be a useful feature for fake news detection, and Derczynski et al.", "(2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks.", "The Facebook analysis of Mocanu et al.", "(2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former.", "Similarly, Acemoglu et al.", "(2010) , Kwon et al.", "(2013) , Ma et al.", "(2017) , and model the spread of (mis-)information, while Budak et al.", "(2011) and Nguyen et al.", "(2012) propose algorithms to limit its spread.", "The efficacy of countermeasures like debunking sites is studied by Tambuscio et al.", "(2015) .", "While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content.", "Long et al., 2017 Mocanu et al., 2015 Acemoglu et al., 2010 Kwon et al., 2013 Ma et al., 2017 Budak et al., 2011 Nguyen et al.", "2012 Derczynski et al., 2017 Tambuscio et al., 2015 Afroz et al., 2012 Badaskar et al., 2008 Rubin et al., 2016 Rashkin et al., 2017 Horne and Adali, 2017 Pérez-Rosas et al., 2017 Wang et al., 2017 Bourgonje et al., 2017 Wu et al., 2014 Ciampaglia et al, 2015 Shi and Weninger, 2016 Etzioni et al., 2018 Magdy and Wanas, 2010 Ginsca et al., 2015 Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work.", "Style-based fake news detection.", "Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis-a result from forensic psychology which asserts that memories of reallife, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967) 
.", "The hypothesis led to the development of forensic tools to assess testimonies at the statement level.", "Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example and .", "In this regard, use rhetorical structure theory as a measure of story coherence and as an indicator for fake news.", "Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements.", "A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017) .", "Most prominently, stance detection was the task of the Fake News Challenge 1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach.", "Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al.", "(1998) assesses entire texts.", "Common applications are author profiling (age, gender, etc.)", "and genre classification.", "Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al.", "(2012) ).", "As an early precursor to fake news detection, Badaskar et al.", "(2008) train models to identify news items that were automatically generated.", "Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al.", "(2016) , ).", "Rashkin et al.", "(2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news.", "We make use of their results by incorporating the bestperforming style features identified.", "Finally, two preprint papers have been recently shared.", "Horne and Adali (2017) use style features for fake news detection.", "However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles.", "We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data.", "Similarly, Pérez-Rosas et al.", "(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. 
real.", "The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web.", "For style analysis, the former dataset may not be suitable, since the authors note themselves that \"workers succeeded in mimicking the reporting style from the original news\".", "The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias.", "Their feature selection follows that of Rubin et al.", "(2016) , which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable.", "The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.", "2 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27.", "Table 1 gives an overview.", "Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones.", "All publishers earned Facebook's blue checkmark , indicating authenticity and an elevated status within the network.", "Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties.", "Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al.", "(2016) reported key insights as a data journalism article.", "The annotations were published alongside the article.", "3 However, this data only comprises URLs to the original Facebook posts.", "To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability.", "Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing.", "Manual fact-checking.", "A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless.", "Therefore, posts were rated \"mostly true,\" \"mixture of true and false,\" \"mostly false,\" or, if the post was opinion-driven or otherwise lacked a factual claim, \"no factual content.\"", "Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin.", "The ratings \"mixture of true and false\" and \"mostly false\" had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one.", "Finally, all news rated \"mostly false\" underwent a final check to ensure the rating was justified, lest the respective publishers would contest it.", "The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately.", "The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up.", "This rating does not allow for unsupported speculation or claims.", "Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but 
some elements or claims are not.", "This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate.", "It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link.", "Finally, use this rating for news articles that are based on unconfirmed information.", "Mostly false: Most or all of the information in the post or in the link being shared is inaccurate.", "This should also be used when the central claim being made is false.", "No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim.", "This is also the category to use for posts that are of the \"Like this if you think...\" variety.", "Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers.", "Annotations were recorded at the article level, not at statement level.", "For text categorization, this is sufficient.", "At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists.", "Table 1 shows the fact-checking results and some key statistics per article.", "Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false.", "Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false.", "Publisher \"The Other 98%\" sticks out by achieving an almost per- fect score.", "By contrast, almost 45% of the rightwing articles are a mixture of true and false (153) or mostly false (72).", "Here, publisher \"Right Wing News\" sticks out by supplying more than half of mixtures of true and false alone, whereas mostly false articles are equally distributed.", "Corpus Statistics Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long with word counts ranging from 550 words on average at ABC News to 800 at Politico.", "Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging at about 420 words and 400 words, respectively.", "Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more.", "When articles comprise links, they are usually external ones, whereas ABC News rather uses internal links, and only half of the links found at Politico articles are external.", "Left-wing news articles stick out by containing almost double the amount of links across publishers than mainstream and right-wing ones.", "Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false.", "Arguably, the latter may not be exactly what is deemed \"fake news\" (as in: a complete fabrication), however, practice shows fake news are hardly ever devoid of truth.", "More often, true facts are misconstrued or framed 
badly.", "In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false-except for satire-fake news, and disregard all articles rated non-factual.", "Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al.", "(2007) , which we employ for the first time to distinguish genre styles as opposed to author styles.", "For sake of reproducibility, all our code has been published.", "4 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain.", "The former are n-grams, n in [1, 3] , of characters, stop words, and parts-of-speech.", "Further, we employ 10 readability scores 5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966) .", "The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length.", "In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents.", "Discarding these features prevents overfitting and improves the chances that our model will generalize.", "If not stated otherwise, our experiments share a common setup.", "In order to avoid biases from the respective training sets, we balance them using oversampling.", "Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher's style.", "For non-Unmasking experiments we use WEKA's random forest implementation with default settings.", "Unmasking Genre Styles Unmasking, as proposed by Koppel et al.", "(2007) , is a meta learning approach for authorship verification.", "We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. 
mainstream news.", "This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation.", "Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author.", "Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope.", "A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors.", "Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier-at least for long documents that can be split up into a sufficient number of chunks.", "It turns out that what applies to the style of authors also applies to genre styles.", "We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input.", "When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author.", "Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions.", "Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F 1 .", "We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake.", "Also, the tasks we are tackling are new, so that little is known to date about user preferences.", "This is also why we chose the evenly-balanced F 1 .", "Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1.", "Can (left/right) hyperpartisanship be distinguished from the mainstream?", "2.", "Is style-based fake news detection feasible?", "3.", "Can fake news be distinguished from satire?", "Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream.", "To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream.", "To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of 
just one.", "Hyperpartisanship vs.", "Mainstream A.", "Predicting orientation.", "Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream-an obvious first experiment for our dataset.", "Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag.", "The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations.", "When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles.", "Misclassified mainstream articles spread almost evenly across the other classes.", "The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks.", "This holds for all subsequent experiments.", "B.", "Predicting hyperpartisanship.", "Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work.", "Rather, we trained a binary classifier to discriminate hyperpartisanship in general from the mainstream.", "Table 3 shows the performance values.", "This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline.", "Comparing Table 2 and Table 3 , we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation?", "Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas.", "Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation.", "Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. 
Validation using leave-out classification.", "If leftwing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa?", "Table 4 shows the results of this experiment.", "As indicated by proportions well above 0.50, full style-based classifiers have a tendency of clas- approach in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity.", "We apply Unmasking as described in Section 4.2 onto pairs of sets of left, right, and mainstream articles.", "Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves).", "The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation.", "In case of the left-wing orientation, where less than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs.", "As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others.", "This result hence matches the findings of the previous experiments.", "With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than any of the two have with the mainstream.", "Another more tangible (e.g., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be discriminated well from the mainstream: in particular the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article's style can be assessed immediately without referring to external information.", "Fake vs. Real (vs. 
Satire) This series of experiments targets research questions (2) and (3) .", "Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire.", "A.", "Predicting veracity.", "When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection.", "A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task.", "We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier.", "Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.", "Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.", "While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F -Measure.", "We conclude that style-based fake news classification simply does not work in general.", "B.", "Predicting satire.", "Yet, not all fake news are the same.", "One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers.", "Regardless the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes.", "Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al.", "(2016) (the S-n-L News DB corpus), distinguishing satire from real news.", "This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class).", "As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82, and an F score of 0.81.", "It clearly improves over topic classification, but does not outperform Rubin et al.", "'s classifier, which includes features based on topic, absurdity, grammar, and punctuation.", "We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along the topics of news.", "A classifier with topic features therefore does not generalize.", "Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it.", "C. Unmasking satire.", "Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture.", "We assess the style similarity of satire from Rubin et al.", "'s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above.", "Figure 3 shows the resulting Un-masking curves.", "The curve for the pair of fake vs. 
real news drops faster compared to the other two pairs.", "Apparently, the style of fake news has more in common with that of real news than either of the two have with satire.", "These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy.", "Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers.", "These are only the most salient things that need be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered.", "Notwithstanding the many attacks on fake news by developing one way or another of fact-checking, we believe it worthwhile to mount our attack from another angle: writing style.", "We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone.", "Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism.", "We further show that satire can be distinguished well from other news, ensuring that humor will not be outcast by fake news detection technology.", "All of these results offer new, tangible, short-term avenues of development, lest large-scale fact-checking is still far out of reach.", "Employed as pre-filtering technologies to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact checkers to the most likely sources of fake news." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "The BuzzFeed-Webis Fake News Corpus", "Corpus Construction", "Limitations", "Corpus Statistics", "Operationalizing Fake News", "Methodology", "Style Features and Feature Selection", "Unmasking Genre Styles", "Baselines", "Performance Measures", "Experiments", "Hyperpartisanship vs. Mainstream", "Fake vs. Real (vs. Satire)", "Conclusion" ] }
GEM-SciDuet-train-130#paper-1353#slide-1
The Political Spectrum
The left-right political spectrum is a system of classifying political positions, ideologies and parties. Left-wing politics and right-wing politics are often presented as opposed, although either may adopt stances from the other side. [Wikipedia] Alt-left Left Center Right Alt-right Hyperpartisan Partisan Partisan Hyperpartisan Partisan: someone with a psychological identification with one major party. [Wikipedia] News media reporting on politics can be aligned on this spectrum as well. We are observing an increasing number of hyperpartisan news publishers.
The left-right political spectrum is a system of classifying political positions, ideologies and parties. Left-wing politics and right-wing politics are often presented as opposed, although either may adopt stances from the other side. [Wikipedia] Alt-left Left Center Right Alt-right Hyperpartisan Partisan Partisan Hyperpartisan Partisan: someone with a psychological identification with one major party. [Wikipedia] News media reporting on politics can be aligned on this spectrum as well. We are observing an increasing number of hyperpartisan news publishers.
[]
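The label mapping from Section 3.4 of the paper content ("Operationalizing Fake News"), which underlies the fake vs. real experiments, can be written down directly. A tiny helper with the rating strings assumed verbatim from the corpus description; the satire flag reflects that satire is excluded from the fake class.

def operationalize(rating, is_satire=False):
    # "mostly true" -> real; "mostly false" and "mixture of true and false" -> fake;
    # satire and "no factual content" are disregarded in the veracity experiments.
    if is_satire or rating == "no factual content":
        return None
    if rating == "mostly true":
        return "real"
    if rating in ("mostly false", "mixture of true and false"):
        return "fake"
    raise ValueError("unknown rating: " + rating)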
GEM-SciDuet-train-130#paper-1353#slide-2
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, have been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F 1 = 0.78), and satire from both (F 1 = 0.81). But stylometry is no silver bullet as style-based fake news detection does not work (F 1 = 0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232 ], "paper_content_text": [ "Introduction The media and the public are currently discussing the recent phenomenon of \"fake news\" and its potential role in swaying elections, how it may affect society, and what can and should be done about it.", "Prone to misunderstanding and misue, the term \"fake news\" arose from the observation that, in social media, a certain kind of 'news' spreads much more successfully than others, and this kind of 'news' is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths.", "Although traditional yellow press has been spreading 'news' of varying de-grees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause.", "The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line.", "For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake-a fundamental right of every free society.", "Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition.", "While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy.", "At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous.", "Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough.", "We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship.", "In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way.", "After a review of related work, 
Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments.", "Related Work Approaches to fake news detection divide into three categories (Figure 1 ): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and style-based (by analyzing writing style).", "Knowledge-based fake news detection.", "Methods from information retrieval have been proposed early on to determine the veracity of web documents.", "For example, Etzioni et al.", "(2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question.", "Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim.", "Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015) .", "Other approaches rely on knowledge bases, including the semantic web and linked open data.", "Wu et al.", "(2014) \"perturb\" a claim in question to query knowledge bases, using the result variations as indicator of the support a knowledge base offers for the claim.", "Ciampaglia et al.", "(2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm.", "However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016) .", "Context-based fake news detection.", "Here, fake news items are identified via meta information and spread patterns.", "For example, Long et al.", "(2017) show that author information can be a useful feature for fake news detection, and Derczynski et al.", "(2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks.", "The Facebook analysis of Mocanu et al.", "(2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former.", "Similarly, Acemoglu et al.", "(2010) , Kwon et al.", "(2013) , Ma et al.", "(2017) , and model the spread of (mis-)information, while Budak et al.", "(2011) and Nguyen et al.", "(2012) propose algorithms to limit its spread.", "The efficacy of countermeasures like debunking sites is studied by Tambuscio et al.", "(2015) .", "While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content.", "Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work.", "Style-based fake news detection.", "Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis-a result from forensic psychology which asserts that memories of real-life, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967) 
.", "The hypothesis led to the development of forensic tools to assess testimonies at the statement level.", "Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example and .", "In this regard, use rhetorical structure theory as a measure of story coherence and as an indicator for fake news.", "Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements.", "A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017) .", "Most prominently, stance detection was the task of the Fake News Challenge 1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach.", "Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al.", "(1998) assesses entire texts.", "Common applications are author profiling (age, gender, etc.)", "and genre classification.", "Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al.", "(2012) ).", "As an early precursor to fake news detection, Badaskar et al.", "(2008) train models to identify news items that were automatically generated.", "Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al.", "(2016) , ).", "Rashkin et al.", "(2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news.", "We make use of their results by incorporating the bestperforming style features identified.", "Finally, two preprint papers have been recently shared.", "Horne and Adali (2017) use style features for fake news detection.", "However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles.", "We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data.", "Similarly, Pérez-Rosas et al.", "(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. 
real.", "The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web.", "For style analysis, the former dataset may not be suitable, since the authors note themselves that \"workers succeeded in mimicking the reporting style from the original news\".", "The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias.", "Their feature selection follows that of Rubin et al.", "(2016) , which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable.", "The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.", "2 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27.", "Table 1 gives an overview.", "Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones.", "All publishers earned Facebook's blue checkmark , indicating authenticity and an elevated status within the network.", "Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties.", "Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al.", "(2016) reported key insights as a data journalism article.", "The annotations were published alongside the article.", "3 However, this data only comprises URLs to the original Facebook posts.", "To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability.", "Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing.", "Manual fact-checking.", "A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless.", "Therefore, posts were rated \"mostly true,\" \"mixture of true and false,\" \"mostly false,\" or, if the post was opinion-driven or otherwise lacked a factual claim, \"no factual content.\"", "Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin.", "The ratings \"mixture of true and false\" and \"mostly false\" had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one.", "Finally, all news rated \"mostly false\" underwent a final check to ensure the rating was justified, lest the respective publishers would contest it.", "The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately.", "The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up.", "This rating does not allow for unsupported speculation or claims.", "Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but 
some elements or claims are not.", "This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate.", "It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link.", "Finally, use this rating for news articles that are based on unconfirmed information.", "Mostly false: Most or all of the information in the post or in the link being shared is inaccurate.", "This should also be used when the central claim being made is false.", "No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim.", "This is also the category to use for posts that are of the \"Like this if you think...\" variety.", "Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers.", "Annotations were recorded at the article level, not at statement level.", "For text categorization, this is sufficient.", "At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists.", "Table 1 shows the fact-checking results and some key statistics per article.", "Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false.", "Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false.", "Publisher \"The Other 98%\" sticks out by achieving an almost per- fect score.", "By contrast, almost 45% of the rightwing articles are a mixture of true and false (153) or mostly false (72).", "Here, publisher \"Right Wing News\" sticks out by supplying more than half of mixtures of true and false alone, whereas mostly false articles are equally distributed.", "Corpus Statistics Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long with word counts ranging from 550 words on average at ABC News to 800 at Politico.", "Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging at about 420 words and 400 words, respectively.", "Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more.", "When articles comprise links, they are usually external ones, whereas ABC News rather uses internal links, and only half of the links found at Politico articles are external.", "Left-wing news articles stick out by containing almost double the amount of links across publishers than mainstream and right-wing ones.", "Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false.", "Arguably, the latter may not be exactly what is deemed \"fake news\" (as in: a complete fabrication), however, practice shows fake news are hardly ever devoid of truth.", "More often, true facts are misconstrued or framed 
badly.", "In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false-except for satire-fake news, and disregard all articles rated non-factual.", "Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al.", "(2007) , which we employ for the first time to distinguish genre styles as opposed to author styles.", "For sake of reproducibility, all our code has been published.", "4 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain.", "The former are n-grams, n in [1, 3] , of characters, stop words, and parts-of-speech.", "Further, we employ 10 readability scores 5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966) .", "The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length.", "In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents.", "Discarding these features prevents overfitting and improves the chances that our model will generalize.", "If not stated otherwise, our experiments share a common setup.", "In order to avoid biases from the respective training sets, we balance them using oversampling.", "Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher's style.", "For non-Unmasking experiments we use WEKA's random forest implementation with default settings.", "Unmasking Genre Styles Unmasking, as proposed by Koppel et al.", "(2007) , is a meta learning approach for authorship verification.", "We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. 
mainstream news.", "This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation.", "Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author.", "Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope.", "A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors.", "Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier-at least for long documents that can be split up into a sufficient number of chunks.", "It turns out that what applies to the style of authors also applies to genre styles.", "We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input.", "When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author.", "Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions.", "Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F 1 .", "We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake.", "Also, the tasks we are tackling are new, so that little is known to date about user preferences.", "This is also why we chose the evenly-balanced F 1 .", "Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1.", "Can (left/right) hyperpartisanship be distinguished from the mainstream?", "2.", "Is style-based fake news detection feasible?", "3.", "Can fake news be distinguished from satire?", "Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream.", "To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream.", "To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of 
just one.", "Hyperpartisanship vs.", "Mainstream A.", "Predicting orientation.", "Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream-an obvious first experiment for our dataset.", "Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag.", "The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations.", "When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles.", "Misclassified mainstream articles spread almost evenly across the other classes.", "The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks.", "This holds for all subsequent experiments.", "B.", "Predicting hyperpartisanship.", "Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work.", "Rather, we trained a binary classifier to discriminate hyperpartisanship in general from the mainstream.", "Table 3 shows the performance values.", "This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline.", "Comparing Table 2 and Table 3 , we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation?", "Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas.", "Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation.", "Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. 
Validation using leave-out classification.", "If leftwing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa?", "Table 4 shows the results of this experiment.", "As indicated by proportions well above 0.50, full style-based classifiers have a tendency of clas- approach in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity.", "We apply Unmasking as described in Section 4.2 onto pairs of sets of left, right, and mainstream articles.", "Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves).", "The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation.", "In case of the left-wing orientation, where less than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs.", "As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others.", "This result hence matches the findings of the previous experiments.", "With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than any of the two have with the mainstream.", "Another more tangible (e.g., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be discriminated well from the mainstream: in particular the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article's style can be assessed immediately without referring to external information.", "Fake vs. Real (vs. 
Satire) This series of experiments targets research questions (2) and (3) .", "Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire.", "A.", "Predicting veracity.", "When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection.", "A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task.", "We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier.", "Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.", "Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.", "While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F -Measure.", "We conclude that style-based fake news classification simply does not work in general.", "B.", "Predicting satire.", "Yet, not all fake news are the same.", "One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers.", "Regardless the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes.", "Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al.", "(2016) (the S-n-L News DB corpus), distinguishing satire from real news.", "This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class).", "As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82, and an F score of 0.81.", "It clearly improves over topic classification, but does not outperform Rubin et al.", "'s classifier, which includes features based on topic, absurdity, grammar, and punctuation.", "We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along the topics of news.", "A classifier with topic features therefore does not generalize.", "Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it.", "C. Unmasking satire.", "Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture.", "We assess the style similarity of satire from Rubin et al.", "'s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above.", "Figure 3 shows the resulting Un-masking curves.", "The curve for the pair of fake vs. 
real news drops faster compared to the other two pairs.", "Apparently, the style of fake news has more in common with that of real news than either of the two have with satire.", "These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy.", "Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers.", "These are only the most salient things that need be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered.", "Notwithstanding the many attacks on fake news by developing one way or another of fact-checking, we believe it worthwhile to mount our attack from another angle: writing style.", "We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone.", "Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism.", "We further show that satire can be distinguished well from other news, ensuring that humor will not be outcast by fake news detection technology.", "All of these results offer new, tangible, short-term avenues of development, lest large-scale fact-checking is still far out of reach.", "Employed as pre-filtering technologies to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact checkers to the most likely sources of fake news." ] }
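The adapted Unmasking procedure described in the methodology above — a style model over the most frequent words, a linear classifier separating two document sets, and iterative removal of the most discriminative features while the classification accuracy is recorded — can be sketched as follows. This is a minimal illustration built on scikit-learn, not the authors' implementation; the function name and the rounds/drop parameters are invented, and dropping by absolute weight is a simplification of removing a fixed number of features per class direction.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def unmasking_curve(docs_a, docs_b, vocab_size=250, rounds=10, drop_per_round=6):
    # Style model: relative frequencies of the overall most frequent words.
    vec = CountVectorizer(max_features=vocab_size)
    X = vec.fit_transform(docs_a + docs_b).toarray().astype(float)
    X /= np.maximum(X.sum(axis=1, keepdims=True), 1.0)
    y = np.array([0] * len(docs_a) + [1] * len(docs_b))
    active = np.arange(X.shape[1])
    curve = []
    for _ in range(rounds):
        clf = LinearSVC()
        # Cross-validated accuracy with the currently active features.
        curve.append(cross_val_score(clf, X[:, active], y, cv=3).mean())
        clf.fit(X[:, active], y)
        # Remove the most discriminative features (largest absolute weights);
        # a steeply falling curve indicates more similar styles.
        order = np.argsort(np.abs(clf.coef_[0]))
        active = active[order[:-drop_per_round]]
    return curve
```

In the genre-style setting quoted above, docs_a and docs_b would be, for example, 100 left-wing and 100 right-wing articles; comparing the slopes of the three pairwise curves corresponds to the analysis the paper reports around its Figure 2.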
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "The BuzzFeed-Webis Fake News Corpus", "Corpus Construction", "Limitations", "Corpus Statistics", "Operationalizing Fake News", "Methodology", "Style Features and Feature Selection", "Unmasking Genre Styles", "Baselines", "Performance Measures", "Experiments", "Hyperpartisanship vs. Mainstream", "Fake vs. Real (vs. Satire)", "Conclusion" ] }
GEM-SciDuet-train-130#paper-1353#slide-2
Fake News and Hyperpartisan News
How can it be that the alt left and the alt right cannot be distinguished from the mainstream, when both together (hyperpartisan news) can be? Alt-left Left Center Right Alt-right Hyperpartisan Partisan Partisan Hyperpartisan The horseshoe theory asserts that the alt left and the alt right, rather than being at opposite and opposing ends of a linear political continuum, in fact closely resemble one another, much like the ends of a horseshoe. [Wikipedia] @KieselJohannes
How can it be that the alt left and the alt right cannot be distinguished from the mainstream, when both together (hyperpartisan news) can be? Alt-left Left Center Right Alt-right Hyperpartisan Partisan Partisan Hyperpartisan The horseshoe theory asserts that the alt left and the alt right, rather than being at opposite and opposing ends of a linear political continuum, in fact closely resemble one another, much like the ends of a horseshoe. [Wikipedia] @KieselJohannes
[]
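The evaluation protocol described in the paper text above — 3-fold cross-validation in which each fold comprises one publisher from each orientation, so that the classifier cannot simply learn a publisher's house style — can be sketched as follows. RandomForestClassifier stands in for WEKA's random forest, the oversampling used to balance the training sets is omitted, and all names are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def publisher_folds(publishers_by_orientation, n_folds=3):
    # publishers_by_orientation: e.g. {"left": [...], "right": [...],
    # "mainstream": [...]}, three publishers per orientation. Fold i holds
    # out the i-th publisher of every orientation, so no publisher
    # contributes articles to both the training and the test side.
    return [{pubs[i] for pubs in publishers_by_orientation.values()}
            for i in range(n_folds)]

def cross_validate(X, y, article_publisher, folds):
    scores = []
    for held_out in folds:
        test = np.array([pub in held_out for pub in article_publisher])
        clf = RandomForestClassifier()  # stand-in for WEKA's random forest
        clf.fit(X[~test], y[~test])
        scores.append(f1_score(y[test], clf.predict(X[test])))
    return scores
```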
GEM-SciDuet-train-130#paper-1353#slide-3
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, have been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F1 = 0.78), and satire from both (F1 = 0.81). But stylometry is no silver bullet as style-based fake news detection does not work (F1 = 0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232 ], "paper_content_text": [ "Introduction The media and the public are currently discussing the recent phenomenon of \"fake news\" and its potential role in swaying elections, how it may affect society, and what can and should be done about it.", "Prone to misunderstanding and misue, the term \"fake news\" arose from the observation that, in social media, a certain kind of 'news' spreads much more successfully than others, and this kind of 'news' is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths.", "Although traditional yellow press has been spreading 'news' of varying de-grees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause.", "The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line.", "For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake-a fundamental right of every free society.", "Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition.", "While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy.", "At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous.", "Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough.", "We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship.", "In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way.", "After a review of related work, 
Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments.", "Related Work Approaches to fake news detection divide into three categories (Figure 1 ): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and style-based (by analyzing writing style).", "Knowledge-based fake news detection.", "Methods from information retrieval have been proposed early on to determine the veracity of web documents.", "For example, Etzioni et al.", "(2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question.", "Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim.", "Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015) .", "Other approaches rely on knowledge bases, including the semantic web and linked open data.", "Wu et al.", "(2014) \"perturb\" a claim in question to query knowledge bases, using the result variations as indicator of the support a knowledge base offers for the claim.", "Ciampaglia et al.", "(2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm.", "However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016) .", "Context-based fake news detection.", "Here, fake news items are identified via meta information and spread patterns.", "For example, Long et al.", "(2017) show that author information can be a useful feature for fake news detection, and Derczynski et al.", "(2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks.", "The Facebook analysis of Mocanu et al.", "(2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former.", "Similarly, Acemoglu et al.", "(2010) , Kwon et al.", "(2013) , Ma et al.", "(2017) , and model the spread of (mis-)information, while Budak et al.", "(2011) and Nguyen et al.", "(2012) propose algorithms to limit its spread.", "The efficacy of countermeasures like debunking sites is studied by Tambuscio et al.", "(2015) .", "While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content.", "Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work.", "Style-based fake news detection.", "Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis-a result from forensic psychology which asserts that memories of real-life, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967) 
.", "The hypothesis led to the development of forensic tools to assess testimonies at the statement level.", "Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example and .", "In this regard, use rhetorical structure theory as a measure of story coherence and as an indicator for fake news.", "Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements.", "A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017) .", "Most prominently, stance detection was the task of the Fake News Challenge 1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach.", "Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al.", "(1998) assesses entire texts.", "Common applications are author profiling (age, gender, etc.)", "and genre classification.", "Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al.", "(2012) ).", "As an early precursor to fake news detection, Badaskar et al.", "(2008) train models to identify news items that were automatically generated.", "Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al.", "(2016) , ).", "Rashkin et al.", "(2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news.", "We make use of their results by incorporating the bestperforming style features identified.", "Finally, two preprint papers have been recently shared.", "Horne and Adali (2017) use style features for fake news detection.", "However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles.", "We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data.", "Similarly, Pérez-Rosas et al.", "(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. 
real.", "The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web.", "For style analysis, the former dataset may not be suitable, since the authors note themselves that \"workers succeeded in mimicking the reporting style from the original news\".", "The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias.", "Their feature selection follows that of Rubin et al.", "(2016) , which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable.", "The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.", "2 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27.", "Table 1 gives an overview.", "Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones.", "All publishers earned Facebook's blue checkmark , indicating authenticity and an elevated status within the network.", "Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties.", "Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al.", "(2016) reported key insights as a data journalism article.", "The annotations were published alongside the article.", "3 However, this data only comprises URLs to the original Facebook posts.", "To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability.", "Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing.", "Manual fact-checking.", "A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless.", "Therefore, posts were rated \"mostly true,\" \"mixture of true and false,\" \"mostly false,\" or, if the post was opinion-driven or otherwise lacked a factual claim, \"no factual content.\"", "Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin.", "The ratings \"mixture of true and false\" and \"mostly false\" had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one.", "Finally, all news rated \"mostly false\" underwent a final check to ensure the rating was justified, lest the respective publishers would contest it.", "The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately.", "The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up.", "This rating does not allow for unsupported speculation or claims.", "Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but 
some elements or claims are not.", "This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate.", "It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link.", "Finally, use this rating for news articles that are based on unconfirmed information.", "Mostly false: Most or all of the information in the post or in the link being shared is inaccurate.", "This should also be used when the central claim being made is false.", "No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim.", "This is also the category to use for posts that are of the \"Like this if you think...\" variety.", "Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers.", "Annotations were recorded at the article level, not at statement level.", "For text categorization, this is sufficient.", "At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists.", "Table 1 shows the fact-checking results and some key statistics per article.", "Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false.", "Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false.", "Publisher \"The Other 98%\" sticks out by achieving an almost per- fect score.", "By contrast, almost 45% of the rightwing articles are a mixture of true and false (153) or mostly false (72).", "Here, publisher \"Right Wing News\" sticks out by supplying more than half of mixtures of true and false alone, whereas mostly false articles are equally distributed.", "Corpus Statistics Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long with word counts ranging from 550 words on average at ABC News to 800 at Politico.", "Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging at about 420 words and 400 words, respectively.", "Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more.", "When articles comprise links, they are usually external ones, whereas ABC News rather uses internal links, and only half of the links found at Politico articles are external.", "Left-wing news articles stick out by containing almost double the amount of links across publishers than mainstream and right-wing ones.", "Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false.", "Arguably, the latter may not be exactly what is deemed \"fake news\" (as in: a complete fabrication), however, practice shows fake news are hardly ever devoid of truth.", "More often, true facts are misconstrued or framed 
badly.", "In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false-except for satire-fake news, and disregard all articles rated non-factual.", "Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al.", "(2007) , which we employ for the first time to distinguish genre styles as opposed to author styles.", "For sake of reproducibility, all our code has been published.", "4 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain.", "The former are n-grams, n in [1, 3] , of characters, stop words, and parts-of-speech.", "Further, we employ 10 readability scores 5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966) .", "The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length.", "In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents.", "Discarding these features prevents overfitting and improves the chances that our model will generalize.", "If not stated otherwise, our experiments share a common setup.", "In order to avoid biases from the respective training sets, we balance them using oversampling.", "Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher's style.", "For non-Unmasking experiments we use WEKA's random forest implementation with default settings.", "Unmasking Genre Styles Unmasking, as proposed by Koppel et al.", "(2007) , is a meta learning approach for authorship verification.", "We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. 
mainstream news.", "This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation.", "Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author.", "Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope.", "A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors.", "Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier-at least for long documents that can be split up into a sufficient number of chunks.", "It turns out that what applies to the style of authors also applies to genre styles.", "We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input.", "When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author.", "Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions.", "Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F 1 .", "We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake.", "Also, the tasks we are tackling are new, so that little is known to date about user preferences.", "This is also why we chose the evenly-balanced F 1 .", "Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1.", "Can (left/right) hyperpartisanship be distinguished from the mainstream?", "2.", "Is style-based fake news detection feasible?", "3.", "Can fake news be distinguished from satire?", "Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream.", "To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream.", "To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of 
just one.", "Hyperpartisanship vs.", "Mainstream A.", "Predicting orientation.", "Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream-an obvious first experiment for our dataset.", "Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag.", "The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations.", "When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles.", "Misclassified mainstream articles spread almost evenly across the other classes.", "The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks.", "This holds for all subsequent experiments.", "B.", "Predicting hyperpartisanship.", "Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work.", "Rather, we trained a binary classifier to discriminate hyperpartisanship in general from the mainstream.", "Table 3 shows the performance values.", "This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline.", "Comparing Table 2 and Table 3 , we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation?", "Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas.", "Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation.", "Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. 
Validation using leave-out classification.", "If left-wing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa?", "Table 4 shows the results of this experiment.", "As indicated by proportions well above 0.50, full style-based classifiers have a tendency of classifying articles of the respectively left-out orientation as hyperpartisan rather than as mainstream.", "D. Validation using Unmasking.", "Moving beyond this approach's original use in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity.", "We apply Unmasking as described in Section 4.2 onto pairs of sets of left, right, and mainstream articles.", "Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves).", "The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation.", "In the case of the left-wing orientation, where fewer than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs.", "As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others.", "This result hence matches the findings of the previous experiments.", "With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than any of the two have with the mainstream.", "Another more tangible (e.g., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be discriminated well from the mainstream: in particular the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article's style can be assessed immediately without referring to external information.", "Fake vs. Real (vs.
Satire) This series of experiments targets research questions (2) and (3).", "Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire.", "A.", "Predicting veracity.", "When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection.", "A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task.", "We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier.", "Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.", "Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.", "While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F-Measure.", "We conclude that style-based fake news classification simply does not work in general.", "B.", "Predicting satire.", "Yet, not all fake news are the same.", "One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers.", "Regardless of the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes.", "Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al.", "(2016) (the S-n-L News DB corpus), distinguishing satire from real news.", "This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class).", "As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82, and an F-score of 0.81.", "It clearly improves over topic classification, but does not outperform Rubin et al.", "'s classifier, which includes features based on topic, absurdity, grammar, and punctuation.", "We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along with the topics of news.", "A classifier with topic features therefore does not generalize.", "Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it.", "C. Unmasking satire.", "Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture.", "We assess the style similarity of satire from Rubin et al.", "'s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above.", "Figure 3 shows the resulting Unmasking curves.", "The curve for the pair of fake vs.
real news drops faster compared to the other two pairs.", "Apparently, the style of fake news has more in common with that of real news than either of the two has with satire.", "These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy.", "Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers.", "These are only the most salient things that need to be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered.", "Notwithstanding the many attacks on fake news by developing one way or another of fact-checking, we believe it worthwhile to mount our attack from another angle: writing style.", "We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone.", "Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism.", "We further show that satire can be distinguished well from other news, ensuring that humor will not be outcast by fake news detection technology.", "All of these results offer new, tangible, short-term avenues of development, while large-scale fact-checking is still far out of reach.", "Employed as pre-filtering technologies to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact checkers to the most likely sources of fake news." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "The BuzzFeed-Webis Fake News Corpus", "Corpus Construction", "Limitations", "Corpus Statistics", "Operationalizing Fake News", "Methodology", "Style Features and Feature Selection", "Unmasking Genre Styles", "Baselines", "Performance Measures", "Experiments", "Hyperpartisanship vs. Mainstream", "Fake vs. Real (vs. Satire)", "Conclusion" ] }
GEM-SciDuet-train-130#paper-1353#slide-3
Why are Fake News Published by Hyperpartisan Pages
Image: Claire Wardle, First Draft
Image: Claire Wardle, First Draft
[]
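The adapted Unmasking procedure serialized in the record above lends itself to a compact sketch: two document sets (e.g., left-wing vs. right-wing articles) are separated by a linear classifier over the 250 most frequent words, the most discriminative features are removed round by round, and the slope of the resulting accuracy curve is inspected. The following Python sketch is an illustration only, assuming scikit-learn; the classifier choice, the number of rounds, and the number of features dropped per round are assumptions, not the authors' published implementation.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def unmasking_curve(docs_a, docs_b, rounds=10, drop_per_round=6):
    # Style model: the 250 most frequent words across both document sets.
    vec = CountVectorizer(max_features=250)
    X = vec.fit_transform(docs_a + docs_b).toarray()
    y = np.array([0] * len(docs_a) + [1] * len(docs_b))
    active = np.arange(X.shape[1])  # indices of features still in the model
    curve = []
    for _ in range(rounds):
        # Record how well the two sets can still be separated.
        curve.append(cross_val_score(LinearSVC(), X[:, active], y, cv=3).mean())
        # Drop the features with the largest absolute weights, i.e., the
        # currently most discriminative ones.
        clf = LinearSVC().fit(X[:, active], y)
        top = np.argsort(-np.abs(clf.coef_[0]))[:drop_per_round]
        active = np.delete(active, top)
    return curve  # a steeply falling curve indicates similar genre styles

Averaging such curves over several runs of 100 articles per orientation, as described above, yields plots like the paper's Figure 2, where the left-vs.-right curve falls fastest.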
GEM-SciDuet-train-130#paper-1353#slide-4
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, has been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F1 = 0.78), and satire from both (F1 = 0.81). But stylometry is no silver bullet as style-based fake news detection does not work (F1 = 0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232 ], "paper_content_text": [ "Introduction The media and the public are currently discussing the recent phenomenon of \"fake news\" and its potential role in swaying elections, how it may affect society, and what can and should be done about it.", "Prone to misunderstanding and misue, the term \"fake news\" arose from the observation that, in social media, a certain kind of 'news' spreads much more successfully than others, and this kind of 'news' is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths.", "Although traditional yellow press has been spreading 'news' of varying de-grees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause.", "The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line.", "For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake-a fundamental right of every free society.", "Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition.", "While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy.", "At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous.", "Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough.", "We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship.", "In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way.", "After a review of related work, 
Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments.", "Related Work Approaches to fake news detection divide into three categories (Figure 1): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and style-based (by analyzing writing style).", "Knowledge-based fake news detection.", "Methods from information retrieval have been proposed early on to determine the veracity of web documents.", "For example, Etzioni et al.", "(2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question.", "Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim.", "Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015).", "Other approaches rely on knowledge bases, including the semantic web and linked open data.", "Wu et al.", "(2014) \"perturb\" a claim in question to query knowledge bases, using the result variations as an indicator of the support a knowledge base offers for the claim.", "Ciampaglia et al.", "(2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm.", "However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016).", "Context-based fake news detection.", "Here, fake news items are identified via meta information and spread patterns.", "For example, Long et al.", "(2017) show that author information can be a useful feature for fake news detection, and Derczynski et al.", "(2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks.", "The Facebook analysis of Mocanu et al.", "(2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former.", "Similarly, Acemoglu et al.", "(2010), Kwon et al.", "(2013), Ma et al.", "(2017), and model the spread of (mis-)information, while Budak et al.", "(2011) and Nguyen et al.", "(2012) propose algorithms to limit its spread.", "The efficacy of countermeasures like debunking sites is studied by Tambuscio et al.", "(2015).", "While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content.", "Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work.", "Style-based fake news detection.", "Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis, a result from forensic psychology which asserts that memories of real-life, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967)
.", "The hypothesis led to the development of forensic tools to assess testimonies at the statement level.", "Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example and .", "In this regard, use rhetorical structure theory as a measure of story coherence and as an indicator for fake news.", "Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements.", "A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017) .", "Most prominently, stance detection was the task of the Fake News Challenge 1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach.", "Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al.", "(1998) assesses entire texts.", "Common applications are author profiling (age, gender, etc.)", "and genre classification.", "Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al.", "(2012) ).", "As an early precursor to fake news detection, Badaskar et al.", "(2008) train models to identify news items that were automatically generated.", "Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al.", "(2016) , ).", "Rashkin et al.", "(2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news.", "We make use of their results by incorporating the bestperforming style features identified.", "Finally, two preprint papers have been recently shared.", "Horne and Adali (2017) use style features for fake news detection.", "However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles.", "We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data.", "Similarly, Pérez-Rosas et al.", "(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. 
real.", "The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web.", "For style analysis, the former dataset may not be suitable, since the authors note themselves that \"workers succeeded in mimicking the reporting style from the original news\".", "The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias.", "Their feature selection follows that of Rubin et al.", "(2016) , which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable.", "The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.", "2 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27.", "Table 1 gives an overview.", "Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones.", "All publishers earned Facebook's blue checkmark , indicating authenticity and an elevated status within the network.", "Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties.", "Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al.", "(2016) reported key insights as a data journalism article.", "The annotations were published alongside the article.", "3 However, this data only comprises URLs to the original Facebook posts.", "To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability.", "Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing.", "Manual fact-checking.", "A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless.", "Therefore, posts were rated \"mostly true,\" \"mixture of true and false,\" \"mostly false,\" or, if the post was opinion-driven or otherwise lacked a factual claim, \"no factual content.\"", "Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin.", "The ratings \"mixture of true and false\" and \"mostly false\" had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one.", "Finally, all news rated \"mostly false\" underwent a final check to ensure the rating was justified, lest the respective publishers would contest it.", "The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately.", "The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up.", "This rating does not allow for unsupported speculation or claims.", "Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but 
some elements or claims are not.", "This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate.", "It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link.", "Finally, use this rating for news articles that are based on unconfirmed information.", "Mostly false: Most or all of the information in the post or in the link being shared is inaccurate.", "This should also be used when the central claim being made is false.", "No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim.", "This is also the category to use for posts that are of the \"Like this if you think...\" variety.", "Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers.", "Annotations were recorded at the article level, not at statement level.", "For text categorization, this is sufficient.", "At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists.", "Table 1 shows the fact-checking results and some key statistics per article.", "Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false.", "Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false.", "Publisher \"The Other 98%\" sticks out by achieving an almost perfect score.", "By contrast, almost 45% of the right-wing articles are a mixture of true and false (153) or mostly false (72).", "Here, publisher \"Right Wing News\" sticks out by supplying more than half of mixtures of true and false alone, whereas mostly false articles are equally distributed.", "Corpus Statistics Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long with word counts ranging from 550 words on average at ABC News to 800 at Politico.", "Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging at about 420 words and 400 words, respectively.", "Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more.", "When articles comprise links, they are usually external ones, whereas ABC News rather uses internal links, and only half of the links found at Politico articles are external.", "Left-wing news articles stick out by containing almost double the number of links compared to mainstream and right-wing ones across publishers.", "Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false.", "Arguably, the latter may not be exactly what is deemed \"fake news\" (as in: a complete fabrication); however, practice shows fake news are hardly ever devoid of truth.", "More often, true facts are misconstrued or framed
badly.", "In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false-except for satire-fake news, and disregard all articles rated non-factual.", "Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al.", "(2007) , which we employ for the first time to distinguish genre styles as opposed to author styles.", "For sake of reproducibility, all our code has been published.", "4 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain.", "The former are n-grams, n in [1, 3] , of characters, stop words, and parts-of-speech.", "Further, we employ 10 readability scores 5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966) .", "The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length.", "In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents.", "Discarding these features prevents overfitting and improves the chances that our model will generalize.", "If not stated otherwise, our experiments share a common setup.", "In order to avoid biases from the respective training sets, we balance them using oversampling.", "Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher's style.", "For non-Unmasking experiments we use WEKA's random forest implementation with default settings.", "Unmasking Genre Styles Unmasking, as proposed by Koppel et al.", "(2007) , is a meta learning approach for authorship verification.", "We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. 
mainstream news.", "This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation.", "Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author.", "Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope.", "A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors.", "Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier-at least for long documents that can be split up into a sufficient number of chunks.", "It turns out that what applies to the style of authors also applies to genre styles.", "We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input.", "When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author.", "Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions.", "Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F 1 .", "We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake.", "Also, the tasks we are tackling are new, so that little is known to date about user preferences.", "This is also why we chose the evenly-balanced F 1 .", "Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1.", "Can (left/right) hyperpartisanship be distinguished from the mainstream?", "2.", "Is style-based fake news detection feasible?", "3.", "Can fake news be distinguished from satire?", "Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream.", "To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream.", "To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of 
just one.", "Hyperpartisanship vs.", "Mainstream A.", "Predicting orientation.", "Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream-an obvious first experiment for our dataset.", "Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag.", "The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations.", "When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles.", "Misclassified mainstream articles spread almost evenly across the other classes.", "The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks.", "This holds for all subsequent experiments.", "B.", "Predicting hyperpartisanship.", "Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work.", "Rather, we trained a binary classifier to discriminate hyperpartisanship in general from the mainstream.", "Table 3 shows the performance values.", "This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline.", "Comparing Table 2 and Table 3 , we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation?", "Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas.", "Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation.", "Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. 
Validation using leave-out classification.", "If left-wing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa?", "Table 4 shows the results of this experiment.", "As indicated by proportions well above 0.50, full style-based classifiers have a tendency of classifying articles of the respectively left-out orientation as hyperpartisan rather than as mainstream.", "D. Validation using Unmasking.", "Moving beyond this approach's original use in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity.", "We apply Unmasking as described in Section 4.2 onto pairs of sets of left, right, and mainstream articles.", "Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves).", "The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation.", "In the case of the left-wing orientation, where fewer than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs.", "As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others.", "This result hence matches the findings of the previous experiments.", "With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than any of the two have with the mainstream.", "Another more tangible (e.g., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be discriminated well from the mainstream: in particular the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article's style can be assessed immediately without referring to external information.", "Fake vs. Real (vs.
Satire) This series of experiments targets research questions (2) and (3).", "Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire.", "A.", "Predicting veracity.", "When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection.", "A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task.", "We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier.", "Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.", "Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.", "While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F-Measure.", "We conclude that style-based fake news classification simply does not work in general.", "B.", "Predicting satire.", "Yet, not all fake news are the same.", "One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers.", "Regardless of the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes.", "Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al.", "(2016) (the S-n-L News DB corpus), distinguishing satire from real news.", "This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class).", "As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82, and an F-score of 0.81.", "It clearly improves over topic classification, but does not outperform Rubin et al.", "'s classifier, which includes features based on topic, absurdity, grammar, and punctuation.", "We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along with the topics of news.", "A classifier with topic features therefore does not generalize.", "Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it.", "C. Unmasking satire.", "Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture.", "We assess the style similarity of satire from Rubin et al.", "'s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above.", "Figure 3 shows the resulting Unmasking curves.", "The curve for the pair of fake vs.
real news drops faster compared to the other two pairs.", "Apparently, the style of fake news has more in common with that of real news than either of the two has with satire.", "These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy.", "Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers.", "These are only the most salient things that need to be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered.", "Notwithstanding the many attacks on fake news by developing one way or another of fact-checking, we believe it worthwhile to mount our attack from another angle: writing style.", "We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone.", "Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism.", "We further show that satire can be distinguished well from other news, ensuring that humor will not be outcast by fake news detection technology.", "All of these results offer new, tangible, short-term avenues of development, while large-scale fact-checking is still far out of reach.", "Employed as pre-filtering technologies to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact checkers to the most likely sources of fake news." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "The BuzzFeed-Webis Fake News Corpus", "Corpus Construction", "Limitations", "Corpus Statistics", "Operationalizing Fake News", "Methodology", "Style Features and Feature Selection", "Unmasking Genre Styles", "Baselines", "Performance Measures", "Experiments", "Hyperpartisanship vs. Mainstream", "Fake vs. Real (vs. Satire)", "Conclusion" ] }
GEM-SciDuet-train-130#paper-1353#slide-4
Fake News Detection Taxonomy of Approaches
Requires political knowledge base • Unavailable ahead of time • We cannot trust the web Knowledge-based (also called fact checking) Semantic web / LOD • Limited to social media platforms • Part of damage already done • Allows for pre-posting check • Real-time reaction possible • Hard to mask • But are style differences sufficient?
Requires political knowledge base • Unavailable ahead of time • We cannot trust the web Knowledge-based (also called fact checking) Semantic web / LOD • Limited to social media platforms • Part of damage already done • Allows for pre-posting check • Real-time reaction possible • Hard to mask • But are style differences sufficient?
[]
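Experiment C above (validation using leave-out classification) admits an equally short sketch: train a binary classifier on one hyperpartisan orientation versus the mainstream, apply it to the held-out orientation, and report the share of held-out articles labeled hyperpartisan. Again, scikit-learn and a plain bag-of-words model stand in for the paper's full style feature set; both are assumptions.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

def leave_out_share(train_hyper, train_main, held_out):
    # Fit on, e.g., right-wing (1) vs. mainstream (0) articles ...
    texts = train_hyper + train_main
    y = [1] * len(train_hyper) + [0] * len(train_main)
    vec = CountVectorizer(min_df=0.10)
    clf = RandomForestClassifier(random_state=0).fit(vec.fit_transform(texts), y)
    # ... then see how the held-out orientation (e.g., left-wing) is labeled.
    pred = clf.predict(vec.transform(held_out))
    return pred.mean()  # shares well above 0.5 support the similarity hypothesis

Proportions well above 0.5, as reported for Table 4 above, are what Experiment C takes as evidence for a shared hyperpartisan writing style.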
GEM-SciDuet-train-130#paper-1353#slide-5
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, has been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F1 = 0.78), and satire from both (F1 = 0.81). But stylometry is no silver bullet as style-based fake news detection does not work (F1 = 0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232 ], "paper_content_text": [ "Introduction The media and the public are currently discussing the recent phenomenon of \"fake news\" and its potential role in swaying elections, how it may affect society, and what can and should be done about it.", "Prone to misunderstanding and misue, the term \"fake news\" arose from the observation that, in social media, a certain kind of 'news' spreads much more successfully than others, and this kind of 'news' is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths.", "Although traditional yellow press has been spreading 'news' of varying de-grees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause.", "The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line.", "For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake-a fundamental right of every free society.", "Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition.", "While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy.", "At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous.", "Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough.", "We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship.", "In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way.", "After a review of related work, 
Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments.", "Related Work Approaches to fake news detection divide into three categories (Figure 1): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and style-based (by analyzing writing style).", "Knowledge-based fake news detection.", "Methods from information retrieval have been proposed early on to determine the veracity of web documents.", "For example, Etzioni et al.", "(2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question.", "Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim.", "Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015).", "Other approaches rely on knowledge bases, including the semantic web and linked open data.", "Wu et al.", "(2014) \"perturb\" a claim in question to query knowledge bases, using the result variations as an indicator of the support a knowledge base offers for the claim.", "Ciampaglia et al.", "(2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm.", "However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016).", "Context-based fake news detection.", "Here, fake news items are identified via meta information and spread patterns.", "For example, Long et al.", "(2017) show that author information can be a useful feature for fake news detection, and Derczynski et al.", "(2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks.", "The Facebook analysis of Mocanu et al.", "(2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former.", "Similarly, Acemoglu et al.", "(2010), Kwon et al.", "(2013), Ma et al.", "(2017), and model the spread of (mis-)information, while Budak et al.", "(2011) and Nguyen et al.", "(2012) propose algorithms to limit its spread.", "The efficacy of countermeasures like debunking sites is studied by Tambuscio et al.", "(2015).", "While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content.", "Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work.", "Style-based fake news detection.", "Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis, a result from forensic psychology which asserts that memories of real-life, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967)
.", "The hypothesis led to the development of forensic tools to assess testimonies at the statement level.", "Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example and .", "In this regard, use rhetorical structure theory as a measure of story coherence and as an indicator for fake news.", "Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements.", "A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017) .", "Most prominently, stance detection was the task of the Fake News Challenge 1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach.", "Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al.", "(1998) assesses entire texts.", "Common applications are author profiling (age, gender, etc.)", "and genre classification.", "Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al.", "(2012) ).", "As an early precursor to fake news detection, Badaskar et al.", "(2008) train models to identify news items that were automatically generated.", "Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al.", "(2016) , ).", "Rashkin et al.", "(2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news.", "We make use of their results by incorporating the bestperforming style features identified.", "Finally, two preprint papers have been recently shared.", "Horne and Adali (2017) use style features for fake news detection.", "However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles.", "We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data.", "Similarly, Pérez-Rosas et al.", "(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. 
real.", "The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web.", "For style analysis, the former dataset may not be suitable, since the authors note themselves that \"workers succeeded in mimicking the reporting style from the original news\".", "The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias.", "Their feature selection follows that of Rubin et al.", "(2016) , which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable.", "The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.", "2 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27.", "Table 1 gives an overview.", "Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones.", "All publishers earned Facebook's blue checkmark , indicating authenticity and an elevated status within the network.", "Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties.", "Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al.", "(2016) reported key insights as a data journalism article.", "The annotations were published alongside the article.", "3 However, this data only comprises URLs to the original Facebook posts.", "To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability.", "Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing.", "Manual fact-checking.", "A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless.", "Therefore, posts were rated \"mostly true,\" \"mixture of true and false,\" \"mostly false,\" or, if the post was opinion-driven or otherwise lacked a factual claim, \"no factual content.\"", "Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin.", "The ratings \"mixture of true and false\" and \"mostly false\" had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one.", "Finally, all news rated \"mostly false\" underwent a final check to ensure the rating was justified, lest the respective publishers would contest it.", "The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately.", "The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up.", "This rating does not allow for unsupported speculation or claims.", "Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but 
some elements or claims are not.", "This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate.", "It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link.", "Finally, use this rating for news articles that are based on unconfirmed information.", "Mostly false: Most or all of the information in the post or in the link being shared is inaccurate.", "This should also be used when the central claim being made is false.", "No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim.", "This is also the category to use for posts that are of the \"Like this if you think...\" variety.", "Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers.", "Annotations were recorded at the article level, not at statement level.", "For text categorization, this is sufficient.", "At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists.", "Table 1 shows the fact-checking results and some key statistics per article.", "Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false.", "Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false.", "Publisher \"The Other 98%\" sticks out by achieving an almost per- fect score.", "By contrast, almost 45% of the rightwing articles are a mixture of true and false (153) or mostly false (72).", "Here, publisher \"Right Wing News\" sticks out by supplying more than half of mixtures of true and false alone, whereas mostly false articles are equally distributed.", "Corpus Statistics Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long with word counts ranging from 550 words on average at ABC News to 800 at Politico.", "Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging at about 420 words and 400 words, respectively.", "Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more.", "When articles comprise links, they are usually external ones, whereas ABC News rather uses internal links, and only half of the links found at Politico articles are external.", "Left-wing news articles stick out by containing almost double the amount of links across publishers than mainstream and right-wing ones.", "Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false.", "Arguably, the latter may not be exactly what is deemed \"fake news\" (as in: a complete fabrication), however, practice shows fake news are hardly ever devoid of truth.", "More often, true facts are misconstrued or framed 
badly.", "In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false-except for satire-fake news, and disregard all articles rated non-factual.", "Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al.", "(2007) , which we employ for the first time to distinguish genre styles as opposed to author styles.", "For sake of reproducibility, all our code has been published.", "4 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain.", "The former are n-grams, n in [1, 3] , of characters, stop words, and parts-of-speech.", "Further, we employ 10 readability scores 5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966) .", "The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length.", "In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents.", "Discarding these features prevents overfitting and improves the chances that our model will generalize.", "If not stated otherwise, our experiments share a common setup.", "In order to avoid biases from the respective training sets, we balance them using oversampling.", "Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher's style.", "For non-Unmasking experiments we use WEKA's random forest implementation with default settings.", "Unmasking Genre Styles Unmasking, as proposed by Koppel et al.", "(2007) , is a meta learning approach for authorship verification.", "We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. 
mainstream news.", "This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation.", "Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author.", "Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope.", "A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors.", "Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier-at least for long documents that can be split up into a sufficient number of chunks.", "It turns out that what applies to the style of authors also applies to genre styles.", "We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input.", "When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author.", "Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions.", "Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F 1 .", "We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake.", "Also, the tasks we are tackling are new, so that little is known to date about user preferences.", "This is also why we chose the evenly-balanced F 1 .", "Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1.", "Can (left/right) hyperpartisanship be distinguished from the mainstream?", "2.", "Is style-based fake news detection feasible?", "3.", "Can fake news be distinguished from satire?", "Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream.", "To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream.", "To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of 
just one.", "Hyperpartisanship vs.", "Mainstream A.", "Predicting orientation.", "Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream-an obvious first experiment for our dataset.", "Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag.", "The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations.", "When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles.", "Misclassified mainstream articles spread almost evenly across the other classes.", "The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks.", "This holds for all subsequent experiments.", "B.", "Predicting hyperpartisanship.", "Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work.", "Rather, we trained a binary classifier to discriminate hyperpartisanship in general from the mainstream.", "Table 3 shows the performance values.", "This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline.", "Comparing Table 2 and Table 3 , we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation?", "Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas.", "Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation.", "Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. 
Validation using leave-out classification.", "If leftwing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa?", "Table 4 shows the results of this experiment.", "As indicated by proportions well above 0.50, full style-based classifiers have a tendency of clas- approach in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity.", "We apply Unmasking as described in Section 4.2 onto pairs of sets of left, right, and mainstream articles.", "Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves).", "The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation.", "In case of the left-wing orientation, where less than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs.", "As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others.", "This result hence matches the findings of the previous experiments.", "With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than any of the two have with the mainstream.", "Another more tangible (e.g., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be discriminated well from the mainstream: in particular the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article's style can be assessed immediately without referring to external information.", "Fake vs. Real (vs. 
Satire) This series of experiments targets research questions (2) and (3) .", "Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire.", "A.", "Predicting veracity.", "When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection.", "A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task.", "We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier.", "Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.", "Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.", "While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F -Measure.", "We conclude that style-based fake news classification simply does not work in general.", "B.", "Predicting satire.", "Yet, not all fake news are the same.", "One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers.", "Regardless the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes.", "Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al.", "(2016) (the S-n-L News DB corpus), distinguishing satire from real news.", "This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class).", "As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82, and an F score of 0.81.", "It clearly improves over topic classification, but does not outperform Rubin et al.", "'s classifier, which includes features based on topic, absurdity, grammar, and punctuation.", "We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along the topics of news.", "A classifier with topic features therefore does not generalize.", "Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it.", "C. Unmasking satire.", "Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture.", "We assess the style similarity of satire from Rubin et al.", "'s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above.", "Figure 3 shows the resulting Un-masking curves.", "The curve for the pair of fake vs. 
real news drops faster compared to the other two pairs.", "Apparently, the style of fake news has more in common with that of real news than either of the two have with satire.", "These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy.", "Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers.", "These are only the most salient things that need be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered.", "Notwithstanding the many attacks on fake news by developing one way or another of fact-checking, we believe it worthwhile to mount our attack from another angle: writing style.", "We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone.", "Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism.", "We further show that satire can be distinguished well from other news, ensuring that humor will not be outcast by fake news detection technology.", "All of these results offer new, tangible, short-term avenues of development, lest large-scale fact-checking is still far out of reach.", "Employed as pre-filtering technologies to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact checkers to the most likely sources of fake news." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "The BuzzFeed-Webis Fake News Corpus", "Corpus Construction", "Limitations", "Corpus Statistics", "Operationalizing Fake News", "Methodology", "Style Features and Feature Selection", "Unmasking Genre Styles", "Baselines", "Performance Measures", "Experiments", "Hyperpartisanship vs. Mainstream", "Fake vs. Real (vs. Satire)", "Conclusion" ] }
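The paper_content above describes the Unmasking procedure operationally: build a style model from the 250 most frequent words, train a linear classifier to separate two chunk sets, iteratively remove the most discriminative features, and inspect the slope of the resulting accuracy curve. The following is a minimal sketch of that loop, assuming scikit-learn and NumPy; `rounds`, `drop_per_round`, and the 3-fold scoring are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def unmasking_curve(chunks_a, chunks_b, rounds=10, drop_per_round=6):
    """Accuracy curve while iteratively removing the most discriminative
    features of a 250-most-frequent-words style model (illustrative helper)."""
    texts = list(chunks_a) + list(chunks_b)
    y = np.array([0] * len(chunks_a) + [1] * len(chunks_b))
    X = CountVectorizer(max_features=250).fit_transform(texts).toarray()
    X = X / np.maximum(X.sum(axis=1, keepdims=True), 1)  # relative frequencies
    keep = np.arange(X.shape[1])
    curve = []
    for _ in range(rounds):
        # cross-validated accuracy of a linear separator on surviving features
        curve.append(cross_val_score(LinearSVC(), X[:, keep], y, cv=3).mean())
        clf = LinearSVC().fit(X[:, keep], y)
        # drop the currently most discriminative features (largest |weight|)
        strongest = np.argsort(-np.abs(clf.coef_[0]))[:drop_per_round]
        keep = np.delete(keep, strongest)
    return curve
```

A steeper drop in the returned curve indicates that the two input sets (e.g., left-wing vs. right-wing chunks) become hard to separate once their few strongest markers are removed, i.e., that their styles are more similar.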
GEM-SciDuet-train-130#paper-1353#slide-5
Fake News and Hyperpartisan News Corpus Construction
true mix false n/a Annotations provided by journalists at BuzzFeed @KieselJohannes
true mix false n/a Annotations provided by journalists at BuzzFeed @KieselJohannes
[]
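The record above specifies the style feature space (character, stop word, and part-of-speech n-grams with n in [1, 3], readability scores, dictionary features, and domain-specific ratios) together with document-frequency cutoffs: word tokens occurring in fewer than 2.5% of documents and n-gram features occurring in fewer than 10% are discarded. Below is a minimal sketch of those cutoffs, assuming scikit-learn; the POS, readability, dictionary, and domain-specific features are omitted for brevity.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import FeatureUnion

# min_df as a float is interpreted by scikit-learn as a fraction of documents,
# matching the cutoffs stated above: single word tokens must appear in at least
# 2.5% of documents, n-gram features in at least 10%.
style_features = FeatureUnion([
    ("word_tokens", CountVectorizer(min_df=0.025)),
    ("word_ngrams", CountVectorizer(ngram_range=(2, 3), min_df=0.10)),
    ("char_ngrams", CountVectorizer(analyzer="char_wb", ngram_range=(1, 3), min_df=0.10)),
])
# Usage: X = style_features.fit_transform(article_texts)  # raw article strings
```

Expressing the cutoffs as `min_df` fractions keeps the vocabulary small, which is the paper's stated guard against overfitting.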
GEM-SciDuet-train-130#paper-1353#slide-6
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, have been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F1 = 0.78), and satire from both (F1 = 0.81). But stylometry is no silver bullet as style-based fake news detection does not work (F1 = 0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232 ], "paper_content_text": [ "Introduction The media and the public are currently discussing the recent phenomenon of \"fake news\" and its potential role in swaying elections, how it may affect society, and what can and should be done about it.", "Prone to misunderstanding and misue, the term \"fake news\" arose from the observation that, in social media, a certain kind of 'news' spreads much more successfully than others, and this kind of 'news' is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths.", "Although traditional yellow press has been spreading 'news' of varying de-grees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause.", "The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line.", "For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake-a fundamental right of every free society.", "Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition.", "While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy.", "At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous.", "Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough.", "We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship.", "In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way.", "After a review of related work, 
Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments.", "Related Work Approaches to fake news detection divide into three categories (Figure 1 ): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and style-based (by analyzing writing style).", "Knowledge-based fake news detection.", "Methods from information retrieval have been proposed early on to determine the veracity of web documents.", "For example, Etzioni et al.", "(2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question.", "Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim.", "Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015) .", "Other approaches rely on knowledge bases, including the semantic web and linked open data.", "Wu et al.", "(2014) \"perturb\" a claim in question to query knowledge bases, using the result variations as indicator of the support a knowledge base offers for the claim.", "Ciampaglia et al.", "(2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm.", "However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016) .", "Context-based fake news detection.", "Here, fake news items are identified via meta information and spread patterns.", "For example, Long et al.", "(2017) show that author information can be a useful feature for fake news detection, and Derczynski et al.", "(2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks.", "The Facebook analysis of Mocanu et al.", "(2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former.", "Similarly, Acemoglu et al.", "(2010) , Kwon et al.", "(2013) , Ma et al.", "(2017) , and model the spread of (mis-)information, while Budak et al.", "(2011) and Nguyen et al.", "(2012) propose algorithms to limit its spread.", "The efficacy of countermeasures like debunking sites is studied by Tambuscio et al.", "(2015) .", "While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content.", "Long et al., 2017 Mocanu et al., 2015 Acemoglu et al., 2010 Kwon et al., 2013 Ma et al., 2017 Budak et al., 2011 Nguyen et al.", "2012 Derczynski et al., 2017 Tambuscio et al., 2015 Afroz et al., 2012 Badaskar et al., 2008 Rubin et al., 2016 Rashkin et al., 2017 Horne and Adali, 2017 Pérez-Rosas et al., 2017 Wang et al., 2017 Bourgonje et al., 2017 Wu et al., 2014 Ciampaglia et al., 2015 Shi and Weninger, 2016 Etzioni et al., 2008 Magdy and Wanas, 2010 Ginsca et al., 2015 Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work.", "Style-based fake news detection.", "Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis-a result from forensic psychology which asserts that memories of real-life, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967)
.", "The hypothesis led to the development of forensic tools to assess testimonies at the statement level.", "Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example and .", "In this regard, use rhetorical structure theory as a measure of story coherence and as an indicator for fake news.", "Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements.", "A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017) .", "Most prominently, stance detection was the task of the Fake News Challenge 1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach.", "Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al.", "(1998) assesses entire texts.", "Common applications are author profiling (age, gender, etc.)", "and genre classification.", "Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al.", "(2012) ).", "As an early precursor to fake news detection, Badaskar et al.", "(2008) train models to identify news items that were automatically generated.", "Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al.", "(2016) , ).", "Rashkin et al.", "(2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news.", "We make use of their results by incorporating the bestperforming style features identified.", "Finally, two preprint papers have been recently shared.", "Horne and Adali (2017) use style features for fake news detection.", "However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles.", "We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data.", "Similarly, Pérez-Rosas et al.", "(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. 
real.", "The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web.", "For style analysis, the former dataset may not be suitable, since the authors note themselves that \"workers succeeded in mimicking the reporting style from the original news\".", "The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias.", "Their feature selection follows that of Rubin et al.", "(2016) , which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable.", "The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.", "2 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27.", "Table 1 gives an overview.", "Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones.", "All publishers earned Facebook's blue checkmark , indicating authenticity and an elevated status within the network.", "Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties.", "Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al.", "(2016) reported key insights as a data journalism article.", "The annotations were published alongside the article.", "3 However, this data only comprises URLs to the original Facebook posts.", "To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability.", "Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing.", "Manual fact-checking.", "A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless.", "Therefore, posts were rated \"mostly true,\" \"mixture of true and false,\" \"mostly false,\" or, if the post was opinion-driven or otherwise lacked a factual claim, \"no factual content.\"", "Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin.", "The ratings \"mixture of true and false\" and \"mostly false\" had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one.", "Finally, all news rated \"mostly false\" underwent a final check to ensure the rating was justified, lest the respective publishers would contest it.", "The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately.", "The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up.", "This rating does not allow for unsupported speculation or claims.", "Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but 
some elements or claims are not.", "This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate.", "It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link.", "Finally, use this rating for news articles that are based on unconfirmed information.", "Mostly false: Most or all of the information in the post or in the link being shared is inaccurate.", "This should also be used when the central claim being made is false.", "No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim.", "This is also the category to use for posts that are of the \"Like this if you think...\" variety.", "Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers.", "Annotations were recorded at the article level, not at statement level.", "For text categorization, this is sufficient.", "At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists.", "Table 1 shows the fact-checking results and some key statistics per article.", "Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false.", "Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false.", "Publisher \"The Other 98%\" sticks out by achieving an almost per- fect score.", "By contrast, almost 45% of the rightwing articles are a mixture of true and false (153) or mostly false (72).", "Here, publisher \"Right Wing News\" sticks out by supplying more than half of mixtures of true and false alone, whereas mostly false articles are equally distributed.", "Corpus Statistics Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long with word counts ranging from 550 words on average at ABC News to 800 at Politico.", "Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging at about 420 words and 400 words, respectively.", "Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more.", "When articles comprise links, they are usually external ones, whereas ABC News rather uses internal links, and only half of the links found at Politico articles are external.", "Left-wing news articles stick out by containing almost double the amount of links across publishers than mainstream and right-wing ones.", "Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false.", "Arguably, the latter may not be exactly what is deemed \"fake news\" (as in: a complete fabrication), however, practice shows fake news are hardly ever devoid of truth.", "More often, true facts are misconstrued or framed 
badly.", "In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false-except for satire-fake news, and disregard all articles rated non-factual.", "Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al.", "(2007) , which we employ for the first time to distinguish genre styles as opposed to author styles.", "For sake of reproducibility, all our code has been published.", "4 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain.", "The former are n-grams, n in [1, 3] , of characters, stop words, and parts-of-speech.", "Further, we employ 10 readability scores 5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966) .", "The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length.", "In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents.", "Discarding these features prevents overfitting and improves the chances that our model will generalize.", "If not stated otherwise, our experiments share a common setup.", "In order to avoid biases from the respective training sets, we balance them using oversampling.", "Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher's style.", "For non-Unmasking experiments we use WEKA's random forest implementation with default settings.", "Unmasking Genre Styles Unmasking, as proposed by Koppel et al.", "(2007) , is a meta learning approach for authorship verification.", "We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. 
mainstream news.", "This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation.", "Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author.", "Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope.", "A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors.", "Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier-at least for long documents that can be split up into a sufficient number of chunks.", "It turns out that what applies to the style of authors also applies to genre styles.", "We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input.", "When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author.", "Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions.", "Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F 1 .", "We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake.", "Also, the tasks we are tackling are new, so that little is known to date about user preferences.", "This is also why we chose the evenly-balanced F 1 .", "Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1.", "Can (left/right) hyperpartisanship be distinguished from the mainstream?", "2.", "Is style-based fake news detection feasible?", "3.", "Can fake news be distinguished from satire?", "Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream.", "To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream.", "To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of 
just one.", "Hyperpartisanship vs.", "Mainstream A.", "Predicting orientation.", "Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream-an obvious first experiment for our dataset.", "Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag.", "The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations.", "When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles.", "Misclassified mainstream articles spread almost evenly across the other classes.", "The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks.", "This holds for all subsequent experiments.", "B.", "Predicting hyperpartisanship.", "Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work.", "Rather, we trained a binary classifier to discriminate hyperpartisanship in general from the mainstream.", "Table 3 shows the performance values.", "This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline.", "Comparing Table 2 and Table 3 , we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation?", "Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas.", "Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation.", "Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. 
Validation using leave-out classification.", "If leftwing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa?", "Table 4 shows the results of this experiment.", "As indicated by proportions well above 0.50, full style-based classifiers have a tendency of clas- approach in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity.", "We apply Unmasking as described in Section 4.2 onto pairs of sets of left, right, and mainstream articles.", "Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves).", "The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation.", "In case of the left-wing orientation, where less than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs.", "As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others.", "This result hence matches the findings of the previous experiments.", "With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than any of the two have with the mainstream.", "Another more tangible (e.g., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be discriminated well from the mainstream: in particular the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article's style can be assessed immediately without referring to external information.", "Fake vs. Real (vs. 
Satire) This series of experiments targets research questions (2) and (3) .", "Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire.", "A.", "Predicting veracity.", "When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection.", "A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task.", "We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier.", "Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.", "Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.", "While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F -Measure.", "We conclude that style-based fake news classification simply does not work in general.", "B.", "Predicting satire.", "Yet, not all fake news are the same.", "One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers.", "Regardless the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes.", "Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al.", "(2016) (the S-n-L News DB corpus), distinguishing satire from real news.", "This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class).", "As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82, and an F score of 0.81.", "It clearly improves over topic classification, but does not outperform Rubin et al.", "'s classifier, which includes features based on topic, absurdity, grammar, and punctuation.", "We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along the topics of news.", "A classifier with topic features therefore does not generalize.", "Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it.", "C. Unmasking satire.", "Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture.", "We assess the style similarity of satire from Rubin et al.", "'s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above.", "Figure 3 shows the resulting Un-masking curves.", "The curve for the pair of fake vs. 
real news drops faster compared to the other two pairs.", "Apparently, the style of fake news has more in common with that of real news than either of the two have with satire.", "These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy.", "Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers.", "These are only the most salient things that need be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered.", "Notwithstanding the many attacks on fake news by developing one way or another of fact-checking, we believe it worthwhile to mount our attack from another angle: writing style.", "We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone.", "Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism.", "We further show that satire can be distinguished well from other news, ensuring that humor will not be outcast by fake news detection technology.", "All of these results offer new, tangible, short-term avenues of development, lest large-scale fact-checking is still far out of reach.", "Employed as pre-filtering technologies to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact checkers to the most likely sources of fake news." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "The BuzzFeed-Webis Fake News Corpus", "Corpus Construction", "Limitations", "Corpus Statistics", "Operationalizing Fake News", "Methodology", "Style Features and Feature Selection", "Unmasking Genre Styles", "Baselines", "Performance Measures", "Experiments", "Hyperpartisanship vs. Mainstream", "Fake vs. Real (vs. Satire)", "Conclusion" ] }
GEM-SciDuet-train-130#paper-1353#slide-6
Fake News and Hyperpartisan News Selected Results
true mix false n/a Politico Fake News Detection Annotations provided by journalists at BuzzFeed @KieselJohannes Occupy Democrats Recall Recall
true mix false n/a Politico Fake News Detection Annotations provided by journalists at BuzzFeed @KieselJohannes Occupy Democrats Recall Recall
[]
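The record above pairs the slide with results produced by the paper's style-based classifier. As the embedded paper_content_text describes (Sections 4.1 and 4.4), that classifier discards features that occur in too few documents, balances its training folds by oversampling, splits folds by publisher so no publisher's style is memorized, and reports class-wise precision, recall, and F1. The following is a minimal sketch of that setup, assuming scikit-learn rather than the authors' WEKA random forest; `docs`, `labels`, and `publishers` are hypothetical arrays aligned per article.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import classification_report

def run_fold(docs, labels, publishers, test_pubs, seed=0):
    """Train on all publishers except test_pubs; evaluate on test_pubs."""
    rng = np.random.default_rng(seed)
    docs, labels, publishers = map(np.asarray, (docs, labels, publishers))
    test = np.isin(publishers, test_pubs)
    # Word unigrams kept only if they occur in at least 2.5% of the
    # training documents (the paper applies a 10% threshold to n-grams).
    vec = CountVectorizer(min_df=0.025)
    X_tr = vec.fit_transform(docs[~test]).toarray()
    y_tr = labels[~test]
    # Oversampling: duplicate minority-class rows until classes balance.
    parts_X, parts_y = [X_tr], [y_tr]
    classes, counts = np.unique(y_tr, return_counts=True)
    for c, n in zip(classes, counts):
        rows = np.where(y_tr == c)[0]
        extra = rng.choice(rows, counts.max() - n, replace=True)
        parts_X.append(X_tr[extra])
        parts_y.append(y_tr[extra])
    clf = RandomForestClassifier(random_state=seed)
    clf.fit(np.vstack(parts_X), np.concatenate(parts_y))
    pred = clf.predict(vec.transform(docs[test]).toarray())
    print(classification_report(labels[test], pred))  # class-wise P/R/F1
```

Calling run_fold three times, each time holding out one publisher per orientation, would mirror the paper's 3-fold scheme in which no publisher appears in both training and test data.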
GEM-SciDuet-train-130#paper-1353#slide-7
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, has been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F1 = 0.78), and satire from both (F1 = 0.81). But stylometry is no silver bullet as style-based fake news detection does not work (F1 = 0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232 ], "paper_content_text": [ "Introduction The media and the public are currently discussing the recent phenomenon of \"fake news\" and its potential role in swaying elections, how it may affect society, and what can and should be done about it.", "Prone to misunderstanding and misue, the term \"fake news\" arose from the observation that, in social media, a certain kind of 'news' spreads much more successfully than others, and this kind of 'news' is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths.", "Although traditional yellow press has been spreading 'news' of varying de-grees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause.", "The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line.", "For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake-a fundamental right of every free society.", "Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition.", "While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy.", "At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous.", "Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough.", "We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship.", "In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way.", "After a review of related work, 
Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments.", "Related Work Approaches to fake news detection divide into three categories (Figure 1 ): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and stylebased (by analyzing writing style).", "Knowledge-based fake news detection.", "Methods from information retrieval have been proposed early on to determine the veracity of web documents.", "For example, Etzioni et al.", "(2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question.", "Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim.", "Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015) .", "Other approaches rely on knowledge bases, including the semantic web and linked open data.", "Wu et al.", "(2014) \"perturb\" a claim in question to query knowledge bases, using the result variations as indicator of the support a knowledge base offers for the claim.", "Ciampaglia et al.", "(2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm.", "However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016) .", "Context-based fake news detection.", "Here, fake news items are identified via meta information and spread patterns.", "For example, Long et al.", "(2017) show that author information can be a useful feature for fake news detection, and Derczynski et al.", "(2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks.", "The Facebook analysis of Mocanu et al.", "(2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former.", "Similarly, Acemoglu et al.", "(2010) , Kwon et al.", "(2013) , Ma et al.", "(2017) , and model the spread of (mis-)information, while Budak et al.", "(2011) and Nguyen et al.", "(2012) propose algorithms to limit its spread.", "The efficacy of countermeasures like debunking sites is studied by Tambuscio et al.", "(2015) .", "While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content.", "Long et al., 2017 Mocanu et al., 2015 Acemoglu et al., 2010 Kwon et al., 2013 Ma et al., 2017 Budak et al., 2011 Nguyen et al.", "2012 Derczynski et al., 2017 Tambuscio et al., 2015 Afroz et al., 2012 Badaskar et al., 2008 Rubin et al., 2016 Rashkin et al., 2017 Horne and Adali, 2017 Pérez-Rosas et al., 2017 Wang et al., 2017 Bourgonje et al., 2017 Wu et al., 2014 Ciampaglia et al, 2015 Shi and Weninger, 2016 Etzioni et al., 2018 Magdy and Wanas, 2010 Ginsca et al., 2015 Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work.", "Style-based fake news detection.", "Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis-a result from forensic psychology which asserts that memories of reallife, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967) 
.", "The hypothesis led to the development of forensic tools to assess testimonies at the statement level.", "Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example and .", "In this regard, use rhetorical structure theory as a measure of story coherence and as an indicator for fake news.", "Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements.", "A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017) .", "Most prominently, stance detection was the task of the Fake News Challenge 1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach.", "Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al.", "(1998) assesses entire texts.", "Common applications are author profiling (age, gender, etc.)", "and genre classification.", "Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al.", "(2012) ).", "As an early precursor to fake news detection, Badaskar et al.", "(2008) train models to identify news items that were automatically generated.", "Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al.", "(2016) , ).", "Rashkin et al.", "(2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news.", "We make use of their results by incorporating the bestperforming style features identified.", "Finally, two preprint papers have been recently shared.", "Horne and Adali (2017) use style features for fake news detection.", "However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles.", "We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data.", "Similarly, Pérez-Rosas et al.", "(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. 
real.", "The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web.", "For style analysis, the former dataset may not be suitable, since the authors note themselves that \"workers succeeded in mimicking the reporting style from the original news\".", "The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias.", "Their feature selection follows that of Rubin et al.", "(2016) , which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable.", "The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.", "2 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27.", "Table 1 gives an overview.", "Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones.", "All publishers earned Facebook's blue checkmark , indicating authenticity and an elevated status within the network.", "Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties.", "Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al.", "(2016) reported key insights as a data journalism article.", "The annotations were published alongside the article.", "3 However, this data only comprises URLs to the original Facebook posts.", "To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability.", "Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing.", "Manual fact-checking.", "A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless.", "Therefore, posts were rated \"mostly true,\" \"mixture of true and false,\" \"mostly false,\" or, if the post was opinion-driven or otherwise lacked a factual claim, \"no factual content.\"", "Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin.", "The ratings \"mixture of true and false\" and \"mostly false\" had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one.", "Finally, all news rated \"mostly false\" underwent a final check to ensure the rating was justified, lest the respective publishers would contest it.", "The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately.", "The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up.", "This rating does not allow for unsupported speculation or claims.", "Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but 
some elements or claims are not.", "This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate.", "It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link.", "Finally, use this rating for news articles that are based on unconfirmed information.", "Mostly false: Most or all of the information in the post or in the link being shared is inaccurate.", "This should also be used when the central claim being made is false.", "No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim.", "This is also the category to use for posts that are of the \"Like this if you think...\" variety.", "Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers.", "Annotations were recorded at the article level, not at statement level.", "For text categorization, this is sufficient.", "At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists.", "Table 1 shows the fact-checking results and some key statistics per article.", "Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false.", "Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false.", "Publisher \"The Other 98%\" sticks out by achieving an almost per- fect score.", "By contrast, almost 45% of the rightwing articles are a mixture of true and false (153) or mostly false (72).", "Here, publisher \"Right Wing News\" sticks out by supplying more than half of mixtures of true and false alone, whereas mostly false articles are equally distributed.", "Corpus Statistics Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long with word counts ranging from 550 words on average at ABC News to 800 at Politico.", "Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging at about 420 words and 400 words, respectively.", "Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more.", "When articles comprise links, they are usually external ones, whereas ABC News rather uses internal links, and only half of the links found at Politico articles are external.", "Left-wing news articles stick out by containing almost double the amount of links across publishers than mainstream and right-wing ones.", "Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false.", "Arguably, the latter may not be exactly what is deemed \"fake news\" (as in: a complete fabrication), however, practice shows fake news are hardly ever devoid of truth.", "More often, true facts are misconstrued or framed 
badly.", "In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false-except for satire-fake news, and disregard all articles rated non-factual.", "Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al.", "(2007) , which we employ for the first time to distinguish genre styles as opposed to author styles.", "For sake of reproducibility, all our code has been published.", "4 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain.", "The former are n-grams, n in [1, 3] , of characters, stop words, and parts-of-speech.", "Further, we employ 10 readability scores 5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966) .", "The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length.", "In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents.", "Discarding these features prevents overfitting and improves the chances that our model will generalize.", "If not stated otherwise, our experiments share a common setup.", "In order to avoid biases from the respective training sets, we balance them using oversampling.", "Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher's style.", "For non-Unmasking experiments we use WEKA's random forest implementation with default settings.", "Unmasking Genre Styles Unmasking, as proposed by Koppel et al.", "(2007) , is a meta learning approach for authorship verification.", "We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. 
mainstream news.", "This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation.", "Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author.", "Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope.", "A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors.", "Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier-at least for long documents that can be split up into a sufficient number of chunks.", "It turns out that what applies to the style of authors also applies to genre styles.", "We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input.", "When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author.", "Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions.", "Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F 1 .", "We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake.", "Also, the tasks we are tackling are new, so that little is known to date about user preferences.", "This is also why we chose the evenly-balanced F 1 .", "Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1.", "Can (left/right) hyperpartisanship be distinguished from the mainstream?", "2.", "Is style-based fake news detection feasible?", "3.", "Can fake news be distinguished from satire?", "Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream.", "To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream.", "To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of 
just one.", "Hyperpartisanship vs.", "Mainstream A.", "Predicting orientation.", "Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream-an obvious first experiment for our dataset.", "Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag.", "The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations.", "When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles.", "Misclassified mainstream articles spread almost evenly across the other classes.", "The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks.", "This holds for all subsequent experiments.", "B.", "Predicting hyperpartisanship.", "Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work.", "Rather, we trained a binary classifier to discriminate hyperpartisanship in general from the mainstream.", "Table 3 shows the performance values.", "This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline.", "Comparing Table 2 and Table 3 , we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation?", "Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas.", "Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation.", "Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. 
Validation using leave-out classification.", "If leftwing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa?", "Table 4 shows the results of this experiment.", "As indicated by proportions well above 0.50, full style-based classifiers have a tendency of clas- approach in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity.", "We apply Unmasking as described in Section 4.2 onto pairs of sets of left, right, and mainstream articles.", "Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves).", "The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation.", "In case of the left-wing orientation, where less than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs.", "As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others.", "This result hence matches the findings of the previous experiments.", "With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than any of the two have with the mainstream.", "Another more tangible (e.g., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be discriminated well from the mainstream: in particular the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article's style can be assessed immediately without referring to external information.", "Fake vs. Real (vs. 
Satire) This series of experiments targets research questions (2) and (3) .", "Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire.", "A.", "Predicting veracity.", "When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection.", "A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task.", "We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier.", "Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.", "Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.", "While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F-Measure.", "We conclude that style-based fake news classification simply does not work in general.", "B.", "Predicting satire.", "Yet, not all fake news are the same.", "One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers.", "Regardless of the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes.", "Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al.", "(2016) (the S-n-L News DB corpus), distinguishing satire from real news.", "This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class).", "As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82, and an F score of 0.81.", "It clearly improves over topic classification, but does not outperform Rubin et al.", "'s classifier, which includes features based on topic, absurdity, grammar, and punctuation.", "We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along the topics of news.", "A classifier with topic features therefore does not generalize.", "Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it.", "C. Unmasking satire.", "Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture.", "We assess the style similarity of satire from Rubin et al.", "'s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above.", "Figure 3 shows the resulting Unmasking curves.", "The curve for the pair of fake vs. 
real news drops faster compared to the other two pairs.", "Apparently, the style of fake news has more in common with that of real news than either of the two have with satire.", "These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy.", "Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers.", "These are only the most salient things that need be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered.", "Notwithstanding the many attacks on fake news by developing one way or another of fact-checking, we believe it worthwhile to mount our attack from another angle: writing style.", "We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone.", "Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism.", "We further show that satire can be distinguished well from other news, ensuring that humor will not be outcast by fake news detection technology.", "All of these results offer new, tangible, short-term avenues of development, lest large-scale fact-checking is still far out of reach.", "Employed as pre-filtering technologies to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact checkers to the most likely sources of fake news." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "The BuzzFeed-Webis Fake News Corpus", "Corpus Construction", "Limitations", "Corpus Statistics", "Operationalizing Fake News", "Methodology", "Style Features and Feature Selection", "Unmasking Genre Styles", "Baselines", "Performance Measures", "Experiments", "Hyperpartisanship vs. Mainstream", "Fake vs. Real (vs. Satire)", "Conclusion" ] }
GEM-SciDuet-train-130#paper-1353#slide-7
Horseshoe Validation Experiment I Leave out Classification
- Classifier is trained to distinguish left-wing and center articles - Right-wing articles are used for testing - Majority of right-wing articles are classified as left-wing rather than center
- Classifier is trained to distinguish left-wing and center articles - Right-wing articles are used for testing - Majority of right-wing articles are classified as left-wing rather than center
[]
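The leave-out experiment in the record above is one of two validations of the horseshoe hypothesis; the other is the Unmasking analysis described in Section 4.2 of the embedded paper_content_text: two sets of articles are represented by their 250 most frequent words, a linear classifier separates them, the most discriminative features are removed round by round, and the accuracy curve is recorded, with a steeper drop indicating more similar styles. A rough sketch of that adapted procedure, assuming scikit-learn; the number of features pruned per round is a hypothetical choice, and the original formulation removes the strongest positive and negative weights separately rather than by absolute value.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def unmasking_curve(docs_a, docs_b, rounds=10, drop_per_round=10):
    """Accuracy per round while pruning the most discriminative features."""
    vec = CountVectorizer(max_features=250)  # the 250 most frequent words
    X = vec.fit_transform(list(docs_a) + list(docs_b)).toarray().astype(float)
    y = np.array([0] * len(docs_a) + [1] * len(docs_b))
    active = np.arange(X.shape[1])  # feature columns still in the model
    curve = []
    for _ in range(rounds):
        clf = LinearSVC(dual=False)
        curve.append(cross_val_score(clf, X[:, active], y, cv=5).mean())
        clf.fit(X[:, active], y)
        # Drop the highest-|weight|, i.e. currently most discriminative,
        # features before the next round.
        top = np.argsort(-np.abs(clf.coef_[0]))[:drop_per_round]
        active = np.delete(active, top)
    return curve
```

Comparing the slopes of unmasking_curve(left, right), unmasking_curve(left, mainstream), and unmasking_curve(right, mainstream), with the argument names standing in for the respective article sets, would reproduce the qualitative comparison behind Figure 2: the steeper the drop, the more similar the two writing styles.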
GEM-SciDuet-train-130#paper-1353#slide-8
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, has been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F1 = 0.78), and satire from both (F1 = 0.81). But stylometry is no silver bullet as style-based fake news detection does not work (F1 = 0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232 ], "paper_content_text": [ "Introduction The media and the public are currently discussing the recent phenomenon of \"fake news\" and its potential role in swaying elections, how it may affect society, and what can and should be done about it.", "Prone to misunderstanding and misue, the term \"fake news\" arose from the observation that, in social media, a certain kind of 'news' spreads much more successfully than others, and this kind of 'news' is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths.", "Although traditional yellow press has been spreading 'news' of varying de-grees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause.", "The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line.", "For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake-a fundamental right of every free society.", "Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition.", "While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy.", "At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous.", "Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough.", "We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship.", "In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way.", "After a review of related work, 
Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments.", "Related Work Approaches to fake news detection divide into three categories (Figure 1 ): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and stylebased (by analyzing writing style).", "Knowledge-based fake news detection.", "Methods from information retrieval have been proposed early on to determine the veracity of web documents.", "For example, Etzioni et al.", "(2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question.", "Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim.", "Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015) .", "Other approaches rely on knowledge bases, including the semantic web and linked open data.", "Wu et al.", "(2014) \"perturb\" a claim in question to query knowledge bases, using the result variations as indicator of the support a knowledge base offers for the claim.", "Ciampaglia et al.", "(2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm.", "However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016) .", "Context-based fake news detection.", "Here, fake news items are identified via meta information and spread patterns.", "For example, Long et al.", "(2017) show that author information can be a useful feature for fake news detection, and Derczynski et al.", "(2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks.", "The Facebook analysis of Mocanu et al.", "(2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former.", "Similarly, Acemoglu et al.", "(2010) , Kwon et al.", "(2013) , Ma et al.", "(2017) , and model the spread of (mis-)information, while Budak et al.", "(2011) and Nguyen et al.", "(2012) propose algorithms to limit its spread.", "The efficacy of countermeasures like debunking sites is studied by Tambuscio et al.", "(2015) .", "While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content.", "Long et al., 2017 Mocanu et al., 2015 Acemoglu et al., 2010 Kwon et al., 2013 Ma et al., 2017 Budak et al., 2011 Nguyen et al.", "2012 Derczynski et al., 2017 Tambuscio et al., 2015 Afroz et al., 2012 Badaskar et al., 2008 Rubin et al., 2016 Rashkin et al., 2017 Horne and Adali, 2017 Pérez-Rosas et al., 2017 Wang et al., 2017 Bourgonje et al., 2017 Wu et al., 2014 Ciampaglia et al, 2015 Shi and Weninger, 2016 Etzioni et al., 2018 Magdy and Wanas, 2010 Ginsca et al., 2015 Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work.", "Style-based fake news detection.", "Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis-a result from forensic psychology which asserts that memories of reallife, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967) 
.", "The hypothesis led to the development of forensic tools to assess testimonies at the statement level.", "Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example and .", "In this regard, use rhetorical structure theory as a measure of story coherence and as an indicator for fake news.", "Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements.", "A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017) .", "Most prominently, stance detection was the task of the Fake News Challenge 1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach.", "Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al.", "(1998) assesses entire texts.", "Common applications are author profiling (age, gender, etc.)", "and genre classification.", "Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al.", "(2012) ).", "As an early precursor to fake news detection, Badaskar et al.", "(2008) train models to identify news items that were automatically generated.", "Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al.", "(2016) , ).", "Rashkin et al.", "(2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news.", "We make use of their results by incorporating the bestperforming style features identified.", "Finally, two preprint papers have been recently shared.", "Horne and Adali (2017) use style features for fake news detection.", "However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles.", "We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data.", "Similarly, Pérez-Rosas et al.", "(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. 
real.", "The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web.", "For style analysis, the former dataset may not be suitable, since the authors note themselves that \"workers succeeded in mimicking the reporting style from the original news\".", "The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias.", "Their feature selection follows that of Rubin et al.", "(2016) , which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable.", "The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.", "2 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27.", "Table 1 gives an overview.", "Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones.", "All publishers earned Facebook's blue checkmark , indicating authenticity and an elevated status within the network.", "Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties.", "Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al.", "(2016) reported key insights as a data journalism article.", "The annotations were published alongside the article.", "3 However, this data only comprises URLs to the original Facebook posts.", "To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability.", "Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing.", "Manual fact-checking.", "A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless.", "Therefore, posts were rated \"mostly true,\" \"mixture of true and false,\" \"mostly false,\" or, if the post was opinion-driven or otherwise lacked a factual claim, \"no factual content.\"", "Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin.", "The ratings \"mixture of true and false\" and \"mostly false\" had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one.", "Finally, all news rated \"mostly false\" underwent a final check to ensure the rating was justified, lest the respective publishers would contest it.", "The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately.", "The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up.", "This rating does not allow for unsupported speculation or claims.", "Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but 
some elements or claims are not.", "This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate.", "It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link.", "Finally, use this rating for news articles that are based on unconfirmed information.", "Mostly false: Most or all of the information in the post or in the link being shared is inaccurate.", "This should also be used when the central claim being made is false.", "No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim.", "This is also the category to use for posts that are of the \"Like this if you think...\" variety.", "Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers.", "Annotations were recorded at the article level, not at statement level.", "For text categorization, this is sufficient.", "At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists.", "Table 1 shows the fact-checking results and some key statistics per article.", "Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false.", "Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false.", "Publisher \"The Other 98%\" sticks out by achieving an almost per- fect score.", "By contrast, almost 45% of the rightwing articles are a mixture of true and false (153) or mostly false (72).", "Here, publisher \"Right Wing News\" sticks out by supplying more than half of mixtures of true and false alone, whereas mostly false articles are equally distributed.", "Corpus Statistics Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long with word counts ranging from 550 words on average at ABC News to 800 at Politico.", "Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging at about 420 words and 400 words, respectively.", "Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more.", "When articles comprise links, they are usually external ones, whereas ABC News rather uses internal links, and only half of the links found at Politico articles are external.", "Left-wing news articles stick out by containing almost double the amount of links across publishers than mainstream and right-wing ones.", "Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false.", "Arguably, the latter may not be exactly what is deemed \"fake news\" (as in: a complete fabrication), however, practice shows fake news are hardly ever devoid of truth.", "More often, true facts are misconstrued or framed 
badly.", "In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false-except for satire-fake news, and disregard all articles rated non-factual.", "Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al.", "(2007) , which we employ for the first time to distinguish genre styles as opposed to author styles.", "For sake of reproducibility, all our code has been published.", "4 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain.", "The former are n-grams, n in [1, 3] , of characters, stop words, and parts-of-speech.", "Further, we employ 10 readability scores 5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966) .", "The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length.", "In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents.", "Discarding these features prevents overfitting and improves the chances that our model will generalize.", "If not stated otherwise, our experiments share a common setup.", "In order to avoid biases from the respective training sets, we balance them using oversampling.", "Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher's style.", "For non-Unmasking experiments we use WEKA's random forest implementation with default settings.", "Unmasking Genre Styles Unmasking, as proposed by Koppel et al.", "(2007) , is a meta learning approach for authorship verification.", "We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. 
mainstream news.", "This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation.", "Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author.", "Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope.", "A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors.", "Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier-at least for long documents that can be split up into a sufficient number of chunks.", "It turns out that what applies to the style of authors also applies to genre styles.", "We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input.", "When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author.", "Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions.", "Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F 1 .", "We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake.", "Also, the tasks we are tackling are new, so that little is known to date about user preferences.", "This is also why we chose the evenly-balanced F 1 .", "Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1.", "Can (left/right) hyperpartisanship be distinguished from the mainstream?", "2.", "Is style-based fake news detection feasible?", "3.", "Can fake news be distinguished from satire?", "Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream.", "To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream.", "To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of 
just one.", "Hyperpartisanship vs.", "Mainstream A.", "Predicting orientation.", "Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream-an obvious first experiment for our dataset.", "Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag.", "The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations.", "When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles.", "Misclassified mainstream articles spread almost evenly across the other classes.", "The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks.", "This holds for all subsequent experiments.", "B.", "Predicting hyperpartisanship.", "Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work.", "Rather, we trained a binary classifier to discriminate hyperpartisanship in general from the mainstream.", "Table 3 shows the performance values.", "This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline.", "Comparing Table 2 and Table 3 , we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation?", "Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas.", "Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation.", "Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. 
Validation using leave-out classification.", "If left-wing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa?", "Table 4 shows the results of this experiment.", "As indicated by proportions well above 0.50, full style-based classifiers have a tendency of classifying articles of the left-out orientation as hyperpartisan rather than mainstream, which supports our hypothesis. D. Validation using Unmasking. Having introduced the Unmasking approach in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity.", "We apply Unmasking as described in Section 4.2 onto pairs of sets of left, right, and mainstream articles.", "Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves).", "The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation.", "In the case of the left-wing orientation, where fewer than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs.", "As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others.", "This result hence matches the findings of the previous experiments.", "With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than either of the two has with the mainstream.", "Another more tangible (i.e., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be discriminated well from the mainstream: in particular the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article's style can be assessed immediately without referring to external information.", "Fake vs. Real (vs.
Satire) This series of experiments targets research questions (2) and (3).", "Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire.", "A.", "Predicting veracity.", "When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection.", "A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task.", "We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier.", "Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.", "Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.", "While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F1-measure.", "We conclude that style-based fake news classification simply does not work in general.", "B.", "Predicting satire.", "Yet, not all fake news are the same.", "One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers.", "Regardless of the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes.", "Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al.", "(2016) (the S-n-L News DB corpus), distinguishing satire from real news.", "This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class).", "As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82, and an F1 score of 0.81.", "It clearly improves over topic classification, but does not outperform Rubin et al.", "'s classifier, which includes features based on topic, absurdity, grammar, and punctuation.", "We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along with the topics of news.", "A classifier with topic features therefore does not generalize.", "Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it.", "C. Unmasking satire.", "Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture.", "We assess the style similarity of satire from Rubin et al.", "'s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above.", "Figure 3 shows the resulting Unmasking curves.", "The curve for the pair of fake vs.
real news drops faster compared to the other two pairs.", "Apparently, the style of fake news has more in common with that of real news than either of the two has with satire.", "These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy.", "Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers.", "These are only the most salient things that need to be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered.", "Notwithstanding the many attacks on fake news by way of one form of fact-checking or another, we believe it worthwhile to mount our attack from another angle: writing style.", "We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone.", "Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism.", "We further show that satire can be distinguished well from other news, ensuring that humor will not be cast out by fake news detection technology.", "All of these results offer new, tangible, short-term avenues of development, while large-scale fact-checking is still far out of reach.", "Employed as pre-filtering technologies to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact checkers to the most likely sources of fake news." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "The BuzzFeed-Webis Fake News Corpus", "Corpus Construction", "Limitations", "Corpus Statistics", "Operationalizing Fake News", "Methodology", "Style Features and Feature Selection", "Unmasking Genre Styles", "Baselines", "Performance Measures", "Experiments", "Hyperpartisanship vs. Mainstream", "Fake vs. Real (vs. Satire)", "Conclusion" ] }
GEM-SciDuet-train-130#paper-1353#slide-8
Horseshoe Validation Experiment II: Unmasking (Koppel & Schler, 2004)
Chunk sets {a, ...} and {b, ...} from document sets A and B. Typical learning characteristic for different authors (A ≠ B) vs. the same author (A = B): a steep drop yields the decision "same". The typical learning characteristic can be learned (meta learning). We apply Unmasking to distinguish style genres. [Plot: normalized accuracy curves for mainstream vs. left, mainstream vs. right, and left vs. right.]
Chunk sets {a, ...} and {b, ...} from document sets A and B. Typical learning characteristic for different authors (A ≠ B) vs. the same author (A = B): a steep drop yields the decision "same". The typical learning characteristic can be learned (meta learning). We apply Unmasking to distinguish style genres. [Plot: normalized accuracy curves for mainstream vs. left, mainstream vs. right, and left vs. right.]
[]
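The adapted Unmasking procedure described in this record's paper content (Section 4.2) is compact enough to sketch. The following is a minimal illustration under stated assumptions, not the authors' implementation: the use of scikit-learn, the LinearSVC stand-in for "a linear classifier", the number of rounds, and the count of features removed per round are choices made here for concreteness; the paper fixes only the style model (the 250 most frequent words) and the iterative removal of the most discriminative features.

```python
# Sketch of adapted Unmasking for genre styles; rounds, k, and the
# classifier choice are assumptions, not taken from the paper.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def unmasking_curve(docs_a, docs_b, vocab_size=250, rounds=10, k=3):
    """Cross-validated accuracy per round while the most discriminative
    features are removed; a steep drop suggests similar (genre) styles."""
    # Style model: relative frequencies of the most frequent words.
    vec = CountVectorizer(max_features=vocab_size)
    X = vec.fit_transform(list(docs_a) + list(docs_b)).toarray().astype(float)
    X /= np.maximum(X.sum(axis=1, keepdims=True), 1.0)  # length-normalize
    y = np.array([0] * len(docs_a) + [1] * len(docs_b))
    active = np.arange(X.shape[1])  # feature columns still in the model
    curve = []
    for _ in range(rounds):
        clf = LinearSVC()
        curve.append(cross_val_score(clf, X[:, active], y, cv=5).mean())
        clf.fit(X[:, active], y)
        # Drop the k currently most discriminative features
        # (largest absolute weights of the linear separator).
        active = np.delete(active, np.argsort(-np.abs(clf.coef_[0]))[:k])
    return curve
```

Comparing, say, unmasking_curve(left_articles, right_articles) against unmasking_curve(left_articles, mainstream_articles) then allows the visual slope comparison the paper reports for Figure 2; training a meta classifier on many such curves automates the "same style" decision.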
GEM-SciDuet-train-130#paper-1353#slide-9
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
GEM-SciDuet-train-130#paper-1353#slide-9
Summary and Outlook
Hyperpartisan news pages produce relatively many fake news articles. Hyperpartisan news can be distinguished quite well based on style. Style-based detection allows for real-time detection. Political extremism in news can be ousted or at least flagged. The style of alt-left and alt-right news is very similar. Linguistic evidence for the horseshoe theory of the political spectrum? Large-scale analysis required.
Hyperpartisan news pages produce relatively many fake news articles. Hyperpartisan news can be distinguished quite well based on style. Style-based detection allows for real-time detection. Political extremism in news can be ousted or at least flagged. The style of alt-left and alt-right news is very similar. Linguistic evidence for the horseshoe theory of the political spectrum? Large-scale analysis required.
[]
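The evaluation protocol behind the experiments this slide summarizes (3-fold cross-validation where each fold holds out one publisher per orientation, oversampling to balance the training folds, and a random forest, per the paper's methodology) can likewise be sketched. This is a hedged approximation: GroupKFold only approximates the paper's manual fold construction, scikit-learn's random forest stands in for WEKA's, character n-grams stand in for the full style model, and all names and labels are illustrative.

```python
# Sketch of publisher-wise CV with oversampling; an approximation of the
# paper's setup, not the authors' WEKA pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import GroupKFold
from sklearn.utils import resample

def grouped_cv_f1(texts, labels, publishers):
    """labels: e.g. 'hyperpartisan' vs. 'mainstream'; publishers: group ids,
    so no test publisher's articles are ever seen during training."""
    # Character 1-3-grams occurring in at least 10% of documents, as one
    # slice of the paper's style model.
    X = TfidfVectorizer(analyzer="char", ngram_range=(1, 3),
                        min_df=0.1).fit_transform(texts)
    y, groups = np.asarray(labels), np.asarray(publishers)
    scores = []
    for train, test in GroupKFold(n_splits=3).split(X, y, groups):
        # Balance the training fold by oversampling the minority class.
        counts = {c: int((y[train] == c).sum()) for c in np.unique(y[train])}
        minority = min(counts, key=counts.get)
        extra = resample(train[y[train] == minority], replace=True,
                         n_samples=max(counts.values()) - counts[minority],
                         random_state=0)
        train = np.concatenate([train, extra])
        clf = RandomForestClassifier(random_state=0).fit(X[train], y[train])
        scores.append(f1_score(y[test], clf.predict(X[test]),
                               pos_label="hyperpartisan"))
    return float(np.mean(scores))
```

A call like grouped_cv_f1(texts, labels, publishers) over nine publishers mirrors the spirit of the paper's Table 3 setup, though exact numbers will differ from the original WEKA pipeline.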
GEM-SciDuet-train-130#paper-1353#slide-10
1353
A Stylometric Inquiry into Hyperpartisan and Fake News
We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, has been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F1 = 0.78), and satire from both (F1 = 0.81). But stylometry is no silver bullet as style-based fake news detection does not work (F1 = 0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232 ], "paper_content_text": [ "Introduction The media and the public are currently discussing the recent phenomenon of \"fake news\" and its potential role in swaying elections, how it may affect society, and what can and should be done about it.", "Prone to misunderstanding and misue, the term \"fake news\" arose from the observation that, in social media, a certain kind of 'news' spreads much more successfully than others, and this kind of 'news' is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths.", "Although traditional yellow press has been spreading 'news' of varying de-grees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause.", "The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line.", "For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake-a fundamental right of every free society.", "Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition.", "While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy.", "At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous.", "Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough.", "We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship.", "In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way.", "After a review of related work, 
Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments.", "Related Work Approaches to fake news detection divide into three categories (Figure 1 ): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and stylebased (by analyzing writing style).", "Knowledge-based fake news detection.", "Methods from information retrieval have been proposed early on to determine the veracity of web documents.", "For example, Etzioni et al.", "(2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question.", "Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim.", "Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015) .", "Other approaches rely on knowledge bases, including the semantic web and linked open data.", "Wu et al.", "(2014) \"perturb\" a claim in question to query knowledge bases, using the result variations as indicator of the support a knowledge base offers for the claim.", "Ciampaglia et al.", "(2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm.", "However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016) .", "Context-based fake news detection.", "Here, fake news items are identified via meta information and spread patterns.", "For example, Long et al.", "(2017) show that author information can be a useful feature for fake news detection, and Derczynski et al.", "(2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks.", "The Facebook analysis of Mocanu et al.", "(2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former.", "Similarly, Acemoglu et al.", "(2010) , Kwon et al.", "(2013) , Ma et al.", "(2017) , and model the spread of (mis-)information, while Budak et al.", "(2011) and Nguyen et al.", "(2012) propose algorithms to limit its spread.", "The efficacy of countermeasures like debunking sites is studied by Tambuscio et al.", "(2015) .", "While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content.", "Long et al., 2017 Mocanu et al., 2015 Acemoglu et al., 2010 Kwon et al., 2013 Ma et al., 2017 Budak et al., 2011 Nguyen et al.", "2012 Derczynski et al., 2017 Tambuscio et al., 2015 Afroz et al., 2012 Badaskar et al., 2008 Rubin et al., 2016 Rashkin et al., 2017 Horne and Adali, 2017 Pérez-Rosas et al., 2017 Wang et al., 2017 Bourgonje et al., 2017 Wu et al., 2014 Ciampaglia et al, 2015 Shi and Weninger, 2016 Etzioni et al., 2018 Magdy and Wanas, 2010 Ginsca et al., 2015 Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work.", "Style-based fake news detection.", "Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis-a result from forensic psychology which asserts that memories of reallife, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967) 
.", "The hypothesis led to the development of forensic tools to assess testimonies at the statement level.", "Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example and .", "In this regard, use rhetorical structure theory as a measure of story coherence and as an indicator for fake news.", "Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements.", "A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017) .", "Most prominently, stance detection was the task of the Fake News Challenge 1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach.", "Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al.", "(1998) assesses entire texts.", "Common applications are author profiling (age, gender, etc.)", "and genre classification.", "Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al.", "(2012) ).", "As an early precursor to fake news detection, Badaskar et al.", "(2008) train models to identify news items that were automatically generated.", "Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al.", "(2016) , ).", "Rashkin et al.", "(2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news.", "We make use of their results by incorporating the bestperforming style features identified.", "Finally, two preprint papers have been recently shared.", "Horne and Adali (2017) use style features for fake news detection.", "However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles.", "We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data.", "Similarly, Pérez-Rosas et al.", "(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. 
real.", "The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web.", "For style analysis, the former dataset may not be suitable, since the authors note themselves that \"workers succeeded in mimicking the reporting style from the original news\".", "The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias.", "Their feature selection follows that of Rubin et al.", "(2016) , which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable.", "The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.", "2 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27.", "Table 1 gives an overview.", "Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones.", "All publishers earned Facebook's blue checkmark , indicating authenticity and an elevated status within the network.", "Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties.", "Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al.", "(2016) reported key insights as a data journalism article.", "The annotations were published alongside the article.", "3 However, this data only comprises URLs to the original Facebook posts.", "To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability.", "Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing.", "Manual fact-checking.", "A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless.", "Therefore, posts were rated \"mostly true,\" \"mixture of true and false,\" \"mostly false,\" or, if the post was opinion-driven or otherwise lacked a factual claim, \"no factual content.\"", "Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin.", "The ratings \"mixture of true and false\" and \"mostly false\" had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one.", "Finally, all news rated \"mostly false\" underwent a final check to ensure the rating was justified, lest the respective publishers would contest it.", "The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately.", "The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up.", "This rating does not allow for unsupported speculation or claims.", "Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but 
some elements or claims are not.", "This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate.", "It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link.", "Finally, use this rating for news articles that are based on unconfirmed information.", "Mostly false: Most or all of the information in the post or in the link being shared is inaccurate.", "This should also be used when the central claim being made is false.", "No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim.", "This is also the category to use for posts that are of the \"Like this if you think...\" variety.", "Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers.", "Annotations were recorded at the article level, not at statement level.", "For text categorization, this is sufficient.", "At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists.", "Table 1 shows the fact-checking results and some key statistics per article.", "Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false.", "Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false.", "Publisher \"The Other 98%\" sticks out by achieving an almost per- fect score.", "By contrast, almost 45% of the rightwing articles are a mixture of true and false (153) or mostly false (72).", "Here, publisher \"Right Wing News\" sticks out by supplying more than half of mixtures of true and false alone, whereas mostly false articles are equally distributed.", "Corpus Statistics Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long with word counts ranging from 550 words on average at ABC News to 800 at Politico.", "Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging at about 420 words and 400 words, respectively.", "Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more.", "When articles comprise links, they are usually external ones, whereas ABC News rather uses internal links, and only half of the links found at Politico articles are external.", "Left-wing news articles stick out by containing almost double the amount of links across publishers than mainstream and right-wing ones.", "Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false.", "Arguably, the latter may not be exactly what is deemed \"fake news\" (as in: a complete fabrication), however, practice shows fake news are hardly ever devoid of truth.", "More often, true facts are misconstrued or framed 
badly.", "In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false-except for satire-fake news, and disregard all articles rated non-factual.", "Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al.", "(2007) , which we employ for the first time to distinguish genre styles as opposed to author styles.", "For sake of reproducibility, all our code has been published.", "4 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain.", "The former are n-grams, n in [1, 3] , of characters, stop words, and parts-of-speech.", "Further, we employ 10 readability scores 5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966) .", "The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length.", "In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents.", "Discarding these features prevents overfitting and improves the chances that our model will generalize.", "If not stated otherwise, our experiments share a common setup.", "In order to avoid biases from the respective training sets, we balance them using oversampling.", "Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher's style.", "For non-Unmasking experiments we use WEKA's random forest implementation with default settings.", "Unmasking Genre Styles Unmasking, as proposed by Koppel et al.", "(2007) , is a meta learning approach for authorship verification.", "We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. 
mainstream news.", "This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation.", "Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author.", "Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope.", "A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors.", "Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier-at least for long documents that can be split up into a sufficient number of chunks.", "It turns out that what applies to the style of authors also applies to genre styles.", "We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input.", "When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author.", "Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions.", "Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F 1 .", "We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake.", "Also, the tasks we are tackling are new, so that little is known to date about user preferences.", "This is also why we chose the evenly-balanced F 1 .", "Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1.", "Can (left/right) hyperpartisanship be distinguished from the mainstream?", "2.", "Is style-based fake news detection feasible?", "3.", "Can fake news be distinguished from satire?", "Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream.", "To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream.", "To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of 
just one.", "Hyperpartisanship vs.", "Mainstream A.", "Predicting orientation.", "Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream-an obvious first experiment for our dataset.", "Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag.", "The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations.", "When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles.", "Misclassified mainstream articles spread almost evenly across the other classes.", "The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks.", "This holds for all subsequent experiments.", "B.", "Predicting hyperpartisanship.", "Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work.", "Rather, we trained a binary classifier to discriminate hyperpartisanship in general from the mainstream.", "Table 3 shows the performance values.", "This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline.", "Comparing Table 2 and Table 3 , we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation?", "Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas.", "Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation.", "Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. 
Validation using leave-out classification.", "If leftwing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa?", "Table 4 shows the results of this experiment.", "As indicated by proportions well above 0.50, full style-based classifiers have a tendency of clas- approach in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity.", "We apply Unmasking as described in Section 4.2 onto pairs of sets of left, right, and mainstream articles.", "Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves).", "The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation.", "In case of the left-wing orientation, where less than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs.", "As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others.", "This result hence matches the findings of the previous experiments.", "With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than any of the two have with the mainstream.", "Another more tangible (e.g., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be discriminated well from the mainstream: in particular the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article's style can be assessed immediately without referring to external information.", "Fake vs. Real (vs. 
Satire) This series of experiments targets research questions (2) and (3) .", "Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire.", "A.", "Predicting veracity.", "When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection.", "A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task.", "We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier.", "Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.", "Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall.", "While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the F -Measure.", "We conclude that style-based fake news classification simply does not work in general.", "B.", "Predicting satire.", "Yet, not all fake news are the same.", "One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers.", "Regardless the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes.", "Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al.", "(2016) (the S-n-L News DB corpus), distinguishing satire from real news.", "This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class).", "As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82, and an F score of 0.81.", "It clearly improves over topic classification, but does not outperform Rubin et al.", "'s classifier, which includes features based on topic, absurdity, grammar, and punctuation.", "We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along the topics of news.", "A classifier with topic features therefore does not generalize.", "Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it.", "C. Unmasking satire.", "Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture.", "We assess the style similarity of satire from Rubin et al.", "'s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above.", "Figure 3 shows the resulting Un-masking curves.", "The curve for the pair of fake vs. 
real news drops faster compared to the other two pairs.", "Apparently, the style of fake news has more in common with that of real news than either of the two have with satire.", "These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy.", "Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers.", "These are only the most salient things that need be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered.", "Notwithstanding the many attacks on fake news by developing one way or another of fact-checking, we believe it worthwhile to mount our attack from another angle: writing style.", "We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone.", "Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism.", "We further show that satire can be distinguished well from other news, ensuring that humor will not be outcast by fake news detection technology.", "All of these results offer new, tangible, short-term avenues of development, lest large-scale fact-checking is still far out of reach.", "Employed as pre-filtering technologies to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact checkers to the most likely sources of fake news." ] }
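To make the Unmasking procedure of Section 4.2 above concrete, the following is a minimal sketch of how an Unmasking curve for two sets of articles could be computed. It is an illustration under stated assumptions, not the authors' implementation: all function and parameter names are ours, the style model is the 250 most frequent words as relative frequencies, and a linear SVM with 5-fold cross-validation stands in for the unspecified linear classifier. The original procedure removes the most discriminative features per weight direction; absolute weights are used here for brevity.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def unmasking_curve(docs_a, docs_b, vocab_size=250, rounds=10, drop_per_round=6):
    """Iteratively remove the most discriminative features of a
    250-word style model and record the classification accuracy."""
    vec = CountVectorizer(max_features=vocab_size)
    X = vec.fit_transform(docs_a + docs_b).toarray().astype(float)
    X /= np.maximum(X.sum(axis=1, keepdims=True), 1.0)  # relative frequencies
    y = np.array([0] * len(docs_a) + [1] * len(docs_b))
    active = np.arange(X.shape[1])  # indices of still-active features
    curve = []
    for _ in range(rounds):
        clf = LinearSVC()
        curve.append(cross_val_score(clf, X[:, active], y, cv=5).mean())
        clf.fit(X[:, active], y)
        weights = np.abs(clf.coef_[0])
        # drop the features with the largest absolute weights
        active = active[np.argsort(weights)[:-drop_per_round]]
    return curve
```

A steep decrease of the returned curve indicates high style similarity between the two input document sets, which is how the left-vs-right comparison in Figure 2 is read.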
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "The BuzzFeed-Webis Fake News Corpus", "Corpus Construction", "Limitations", "Corpus Statistics", "Operationalizing Fake News", "Methodology", "Style Features and Feature Selection", "Unmasking Genre Styles", "Baselines", "Performance Measures", "Experiments", "Hyperpartisanship vs. Mainstream", "Fake vs. Real (vs. Satire)", "Conclusion" ] }
GEM-SciDuet-train-130#paper-1353#slide-10
Style Model
n-grams with n in [1, 3] of characters, stop words, parts-of-speech • 10 readability scores • Dictionary features based on General Inquirer • Ratios of quoted words, external links, number of paragraphs, and their average length • Discard word features (n-gram features) occurring in less than 2.5% (10%) of the documents • Balancing using oversampling • Publishers are not represented in both training and test set • WEKA's random forest with default parameters
n-grams with n in [1, 3] of characters, stop words, parts-of-speech • 10 readability scores • Dictionary features based on General Inquirer • Ratios of quoted words, external links, number of paragraphs, and their average length • Discard word features (n-gram features) occurring in less than 2.5% (10%) of the documents • Balancing using oversampling • Publishers are not represented in both training and test set • WEKA's random forest with default parameters
[]
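A sketch of the evaluation protocol summarized on the slide above, under stated assumptions: a scikit-learn random forest stands in for WEKA's implementation, inputs are assumed to be NumPy arrays, the fold-to-publisher assignment is passed in by the caller (in the paper, each fold holds out one publisher per orientation), and `oversample` is an illustrative helper that balances classes by duplicating rows.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def oversample(X, y, rng):
    """Balance classes by duplicating minority-class rows."""
    classes, counts = np.unique(y, return_counts=True)
    idx = []
    for c, n in zip(classes, counts):
        rows = np.where(y == c)[0]
        idx.extend(rows)
        idx.extend(rng.choice(rows, size=counts.max() - n, replace=True))
    idx = np.asarray(idx)
    return X[idx], y[idx]

def publisher_cv(X, y, publishers, folds, seed=0):
    """Cross-validation where `folds` lists the publishers held out per
    fold, so no publisher appears in both training and test set."""
    rng = np.random.default_rng(seed)
    accuracies = []
    for held_out in folds:
        test = np.isin(publishers, list(held_out))
        X_train, y_train = oversample(X[~test], y[~test], rng)
        clf = RandomForestClassifier(random_state=seed)  # stand-in for WEKA
        clf.fit(X_train, y_train)
        accuracies.append(clf.score(X[test], y[test]))
    return float(np.mean(accuracies))
```

Holding publishers out of the training folds is the design choice that keeps the classifier from merely learning to recognize individual news portals.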
GEM-SciDuet-train-131#paper-1354#slide-1
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both human and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topicrelevant content than a popular sequence-tosequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .", "Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder 
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts.", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "In total, 264,670 politics abstracts and 827,437 of non-politics are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015), our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens $x = \{x^O; x^E\}$, where $x^O$ is the statement sequence and $x^E$ contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between $x^O$ and $x^E$.", "Our model then first generates a set of keyphrases as a sequence $y^p = \{y^p_l\}$, followed by an argument $y^a = \{y^a_t\}$, by maximizing $\log P(y|x)$, where $y = \{y^p; y^a\}$.", "The objective is further decomposed into $\sum_t \log P(y_t | y_{1:t-1}, x)$, with each term estimated by a softmax function over a non-linear transformation of decoder hidden states $s^a_t$ and $s^p_t$, for argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: $s_t = g(s_{t-1}, c_t, y_t)$ (1); $c_t = \sum_{j=1}^{T} \alpha_{tj} h_j$ (2); $\alpha_{tj} = \exp(e_{tj}) / \sum_{k=1}^{T} \exp(e_{tk})$ (3); $e_{tj} = v^\top \tanh(W_h h_j + W_s s_t + b_{attn})$ (4).", "Notice that two sets of parameters and different state update functions $g(\cdot)$ are learned for separate decoders: $\{W_h^a, W_s^a, b_{attn}^a, g^a(\cdot)\}$ for the argument decoder; $\{W_h^p, W_s^p, b_{attn}^p, g^p(\cdot)\}$ for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (biLSTM) is used to obtain the encoder hidden states $h_i$ for each time step $i$.", "For the biLSTM, the hidden state is the concatenation of forward and backward hidden states: $h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]$.", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014), and updated during training.", "The last hidden state of the encoder is used to initialize both decoders.", "In our model the encoder is shared by argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: keyphrase decoder and argument decoder, each is implemented with a separate two-layer unidirectional LSTM, in a similar spirit with one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015)."
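The additive attention of Eqs. (2)-(4) can be written compactly in NumPy. The sketch below is illustrative only: parameters are passed in as if already learned, and the variable names are ours (h holds the T encoder states of dimension d, s_t is the current decoder state).

```python
import numpy as np

def additive_attention(h, s_t, W_h, W_s, b_attn, v):
    """h: encoder states, shape (T, d); s_t: decoder state, shape (d,).
    Returns the context vector c_t (Eq. 2) and weights alpha_t (Eq. 3)."""
    e_t = np.tanh(h @ W_h.T + s_t @ W_s.T + b_attn) @ v  # scores, Eq. (4)
    alpha_t = np.exp(e_t - e_t.max())
    alpha_t /= alpha_t.sum()                             # softmax, Eq. (3)
    c_t = alpha_t @ h                                    # context, Eq. (2)
    return c_t, alpha_t
```

In the model above, two such parameter sets are instantiated, one per decoder, and Eqs. (7)-(9) below apply the same computation over the keyphrase decoder states instead of the encoder states.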
, "The distinction is that our training objective is the sum of two loss functions: $\mathcal{L}(\theta) = -\frac{\alpha}{T_p} \sum_{(x, y^p) \in D} \log P(y^p | x; \theta) - \frac{1-\alpha}{T_a} \sum_{(x, y^a) \in D} \log P(y^a | x; \theta)$ (5), where $T_p$ and $T_a$ denote the lengths of reference keyphrase sequence and argument sequence.", "$\alpha$ is a weighting parameter, and it is set as 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "Additional context vector $c'_t$ is then computed over keyphrase decoder hidden states $s^p_j$, which is used for computing the new argument decoder state: $s^a_t = g'(s^a_{t-1}, [c_t; c'_t], y^a_t)$ (6); $c'_t = \sum_{j=1}^{T_p} \alpha'_{tj} s^p_j$ (7); $\alpha'_{tj} = \exp(e'_{tj}) / \sum_{k=1}^{T_p} \exp(e'_{tk})$ (8); $e'_{tj} = v^\top \tanh(W_p s^p_j + W_a s^a_t + b_{attn})$ (9), where $s^p_j$ is the hidden state of keyphrase decoder at position $j$, $s^a_t$ is the hidden state of argument decoder at timestep $t$, and $c_t$ is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy on the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016), e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on beam's coverage of content words from input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from OP.", "Article Retrieval.", "We first create an inverted index lookup
table for Wikipedia as done in Chen et al.", "(2017) .", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval will be conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of OP in the motivating example ( Figure 2 ).", "Top five retrieved articles with highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP .", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as gold-standard generation for training.", "6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence by our current decoder takes a huge amount of time.", "We there propose a sampling strategy to allow the encoder to finish encoding within reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.", "This procedure is repeated three times per statement, where a statement is an user argument for training data and an OP for test set.", "In our experiments, we remove duplicates samples and the ones without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied with the maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.", 
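To illustrate the TF-IDF reranking step of the retrieval pipeline described above (Section 5.1), here is a minimal sketch using scikit-learn; the function name and the `split_sentences` helper in the usage comment are ours, and the upstream article-retrieval stage is assumed to exist.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank(statement, candidates, top_k):
    """Keep up to `top_k` candidates with positive TF-IDF similarity
    to the statement, most similar first."""
    if not candidates:
        return []
    matrix = TfidfVectorizer().fit_transform([statement] + candidates)
    sims = cosine_similarity(matrix[0], matrix[1:])[0]
    ranked = sorted(zip(sims, candidates), key=lambda pair: -pair[0])
    return [cand for score, cand in ranked[:top_k] if score > 0]

# paragraphs = rerank(statement, retrieved_paragraphs, top_k=100)
# evidence   = rerank(statement, split_sentences(paragraphs), top_k=10)
```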
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
could be that arguments generated based on system retrieval contain less topic-specific words and more generic argumentative phrases.", "Since the later is often observed in human written arguments, it may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3 , we find that reranking with a smaller step size, e.g., Beams are reranked at every 5, 10, and 20 steps (p).", "For each step size, we also show the effect of varying k, where top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled over multinomial distribution after removing the k words.", "Reranking with smaller step size yields better results.", "p = 5, can generally lead to better METEOR scores.", "Although varying the number of top words for beam expansion does not yield significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.", "with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve the goal, we first train a topicrelevance estimation model inspired by the latent semantic model in Huang et al.", "(2013) .", "A pair of OP and argument, each represented as the average of word embeddings, are separately fed into a twolayer transformation model.", "A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, developing, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between OP and the corresponding system arguments.", "Each system argument is treated as positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences most similar to that of the positive sample.", "Intuitively, if an argument contains more topic relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.", "Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4 .", "The ranker yields significantly better scores over arguments generated from models trained with evidence, compared to arguments generated by SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.", "This further implies that our 
model generates more topic-relevant content.", "Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 System Gram Info Rel RETRIEVAL 4.5 ± 0.6 3.7 ± 0.9 3.3 ± 1.1 SEQ2SEQ 3.3 ± 1.1 1.2 ± 0.5 1.4 ± 0.7 OUR MODEL 2.5 ± 0.8 1.6 ± 0.8 1.8 ± 0.8 Table 5 : Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments.", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "to 5 (with 5 as best): Grammaticality-whether an argument is fluent, informativeness-whether the argument contains useful information and is not generic, and relevance-whether the argument contains information of a different stance or offtopic.", "30 CMV threads are randomly selected, each of which is presented with randomly-shuffled OP statement and four system arguments.", "Table 5 shows that our model with separate decoder and attention over keyphrases produce significantly more informative and relevant arguments than seq2seq trained without evidence.", "8 However, we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human edited text.", "Sample arguments are displayed in Figure 4 .", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis over the generated keyphrases by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments).", "Furthermore, 36% of generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4 , keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4 , our model generally captures more relevant concepts, e.g., \"military army\" and \"wars Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.", 
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-1
Motivation
Argumentation is crucial in communication. We want to avoid biased perception and uninformed decisions. Being informative is already non-trivial, not to mention being persuasive.
Argumentation is crucial in communication. We want to avoid biased perception and uninformed decisions. Being informative is already non-trivial, not to mention being persuasive.
[]
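Each record in this dump pairs one slide with the full source paper, so the paper fields repeat across all of a paper's slides. The sketch below shows one way to iterate over such records; it is a minimal illustration, not official loading code, and the JSON-lines layout and the file name `sciduet_train.jsonl` are assumptions based only on the field names visible in this dump (gem_id, paper_content, slide_title, target, and so on).

```python
import json

def load_rows(path="sciduet_train.jsonl"):
    """Yield (gem_id, slide_title, target, source_sentences) per record.
    Assumes one JSON object per line with the fields visible in this dump."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            # paper_content pairs sentence ids with the sentence strings.
            sentences = row["paper_content"]["paper_content_text"]
            yield row["gem_id"], row["slide_title"], row["target"], sentences

if __name__ == "__main__":
    # Example: report how much source material each slide can draw on.
    for gem_id, title, target, sentences in load_rows():
        print(f"{gem_id}: slide '{title}' with {len(sentences)} source sentences")
```

Pairing `slide_title` and `target` with `paper_content_text` yields input-output examples for a sentence-extraction or slide-generation model.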
GEM-SciDuet-train-131#paper-1354#slide-2
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High-quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as an intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoder-based sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia.", "Our decoder consists of two separate parts, one of which first generates keyphrases as an intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model-generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generation models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries is constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed by detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017.", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from the statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language, (3) awarded with a delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high-quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs.
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts.", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "In total, 264,670 politics abstracts and 827,437 non-politics abstracts are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015) , our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens x = {x_O; x_E}, where x_O is the statement sequence and x_E contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between x_O and x_E.", "Our model then first generates a set of keyphrases as a sequence y^p = {y^p_l}, followed by an argument y^a = {y^a_t}, by maximizing log P(y|x), where y = {y^p; y^a}.", "The objective is further decomposed into Σ_t log P(y_t|y_{1:t−1}, x), with each term estimated by a softmax function over a non-linear transformation of decoder hidden states s^a_t and s^p_t, for the argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: s_t = g(s_{t−1}, c_t, y_t) (1), c_t = Σ_{j=1}^{T} α_{tj} h_j (2), α_{tj} = exp(e_{tj}) / Σ_{k=1}^{T} exp(e_{tk}) (3), e_{tj} = v^T tanh(W_h h_j + W_s s_t + b_attn) (4).", "Notice that two sets of parameters and different state update functions g(·) are learned for the separate decoders: {W^a_h, W^a_s, b^a_attn, g^a(·)} for the argument decoder; {W^p_h, W^p_s, b^p_attn, g^p(·)} for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (biLSTM) is used to obtain the encoder hidden states h_i for each time step i.", "For the biLSTM, the hidden state is the concatenation of the forward and backward hidden states: h_i = [→h_i; ←h_i].", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , and updated during training.", "The last hidden state of the encoder is used to initialize both decoders.", "In our model the encoder is shared by the argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: a keyphrase decoder and an argument decoder, each implemented with a separate two-layer unidirectional LSTM, in a
similar spirit to one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015) .", "The distinction is that our training objective is the sum of two loss functions: L(θ) = −(α/T_p) Σ_{(x, y^p)∈D} log P(y^p|x; θ) − ((1−α)/T_a) Σ_{(x, y^a)∈D} log P(y^a|x; θ) (5), where T_p and T_a denote the lengths of the reference keyphrase sequence and argument sequence.", "α is a weighting parameter, and it is set as 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "An additional context vector c'_t is then computed over the keyphrase decoder hidden states s^p_j, which is used for computing the new argument decoder state: s^a_t = g'(s^a_{t−1}, [c_t; c'_t], y^a_t) (6), c'_t = Σ_{j=1}^{T_p} α'_{tj} s^p_j (7), α'_{tj} = exp(e'_{tj}) / Σ_{k=1}^{T_p} exp(e'_{tk}) (8), e'_{tj} = v'^T tanh(W'_p s^p_j + W'_a s^a_t + b'_attn) (9), where s^p_j is the hidden state of the keyphrase decoder at position j, s^a_t is the hidden state of the argument decoder at timestep t, and c_t is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy for the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016) , e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on each beam's coverage of content words from the input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential to encourage more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from OP.", "Article Retrieval.", "We first create an inverted index lookup
table for Wikipedia as done in Chen et al.", "(2017) .", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval will be conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of OP in the motivating example ( Figure 2 ).", "The top five retrieved articles with the highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top-ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP .", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as gold-standard generation for training.", "6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence by our current encoder takes a huge amount of time.", "We therefore propose a sampling strategy to allow the encoder to finish encoding within a reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.", "This procedure is repeated three times per statement, where a statement is a user argument for the training data and an OP for the test set.", "In our experiments, we remove duplicate samples and the ones without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied with the maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.",
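The hybrid beam expansion and segment-based reranking described under Hybrid Beam Search Decoding above are straightforward to sketch. The following is a minimal illustration rather than the authors' released implementation; the function names, the use of NumPy, and sampling without replacement are assumptions.

```python
import numpy as np

def hybrid_expand(probs, k=10, n=3, rng=np.random.default_rng(0)):
    """Expand one hypothesis: take the top-n words deterministically, then
    draw k - n more from the renormalized multinomial over the remaining
    vocabulary (drawn without replacement, which is an assumption)."""
    top_n = np.argsort(probs)[::-1][:n]
    rest = np.setdiff1d(np.arange(len(probs)), top_n)
    rest_probs = probs[rest] / probs[rest].sum()
    sampled = rng.choice(rest, size=k - n, replace=False, p=rest_probs)
    return np.concatenate([top_n, sampled])

def coverage(beam_tokens, input_content_words):
    """Segment-based reranking score: input content words covered by a beam."""
    return len(set(beam_tokens) & set(input_content_words))

# Toy check over a small vocabulary distribution.
probs = np.array([0.4, 0.2, 0.1, 0.08, 0.07, 0.05, 0.04, 0.03, 0.02, 0.005, 0.005])
print(hybrid_expand(probs, k=10, n=3))
```

With k = 10 and n = 3 (the paper's settings), three children per hypothesis are picked greedily and seven are sampled; every p steps the beams would then be reordered by `coverage` instead of accumulated log-likelihood.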
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
could be that arguments generated based on system retrieval contain fewer topic-specific words and more generic argumentative phrases.", "Since the latter is often observed in human-written arguments, it may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3 , we find that reranking with a smaller step size, e.g., p = 5, can generally lead to better METEOR scores.", "(Figure 3: Beams are reranked at every 5, 10, and 20 steps (p). For each step size, we also show the effect of varying k, where the top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled from the multinomial distribution after removing the k words. Reranking with a smaller step size yields better results.)", "Although varying the number of top words for beam expansion does not yield a significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.", "with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve the goal, we first train a topic-relevance estimation model inspired by the latent semantic model in Huang et al.", "(2013) .", "An OP and an argument, each represented as the average of its word embeddings, are separately fed into a two-layer transformation model.", "A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, development, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "Details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between OP and the corresponding system arguments.", "Each system argument is treated as a positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences are most similar to those of the positive sample.", "Intuitively, if an argument contains more topic-relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.", "Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4 .", "The ranker yields significantly better scores for arguments generated from models trained with evidence, compared to arguments generated by the SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.", "This further implies that our
model generates more topic-relevant content.", "Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 to 5 (with 5 as best): Grammaticality (whether an argument is fluent), informativeness (whether the argument contains useful information and is not generic), and relevance (whether the argument contains information of a different stance and is not off-topic).", "30 CMV threads are randomly selected, each of which is presented with a randomly-shuffled OP statement and four system arguments.", "Table 5: Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments: RETRIEVAL 4.5 ± 0.6 (Gram), 3.7 ± 0.9 (Info), 3.3 ± 1.1 (Rel); SEQ2SEQ 3.3 ± 1.1, 1.2 ± 0.5, 1.4 ± 0.7; OUR MODEL 2.5 ± 0.8, 1.6 ± 0.8, 1.8 ± 0.8.", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "Table 5 shows that our model with separate decoder and attention over keyphrases produces significantly more informative and relevant arguments than seq2seq trained without evidence.", "However, we also observe that human judges prefer the retrieved arguments over those of generation-based models, illustrating the gap between system arguments and human-edited text.", "Sample arguments are displayed in Figure 4 .", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis of the keyphrases generated by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments).", "Furthermore, 36% of generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high-level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4 , keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4 , our model generally captures more relevant concepts, e.g., \"military army\" and \"wars of the world\", as discussed in the first example.", "Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.", "However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "Meanwhile, our model also acquires argumentative-style language, though there is still a noticeable gap between system arguments and human-constructed arguments.", "As discovered by our prior work, both topical content and language style are essential elements for high-quality arguments.", "For future work, generation models with better control of linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "However, it only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.",
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-2
Research Question
How can we automate human argumentation process?
How can we automate human argumentation process?
[]
GEM-SciDuet-train-131#paper-1354#slide-3
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High-quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as an intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoder-based sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia.", "Our decoder consists of two separate parts, one of which first generates keyphrases as an intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model-generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generation models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries is constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed by detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017.", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from the statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language, (3) awarded with a delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high-quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs.
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts.", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "In total, 264,670 politics abstracts and 827,437 non-politics abstracts are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015) , our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens x = {x_O; x_E}, where x_O is the statement sequence and x_E contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between x_O and x_E.", "Our model then first generates a set of keyphrases as a sequence y^p = {y^p_l}, followed by an argument y^a = {y^a_t}, by maximizing log P(y|x), where y = {y^p; y^a}.", "The objective is further decomposed into Σ_t log P(y_t|y_{1:t−1}, x), with each term estimated by a softmax function over a non-linear transformation of decoder hidden states s^a_t and s^p_t, for the argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: s_t = g(s_{t−1}, c_t, y_t) (1), c_t = Σ_{j=1}^{T} α_{tj} h_j (2), α_{tj} = exp(e_{tj}) / Σ_{k=1}^{T} exp(e_{tk}) (3), e_{tj} = v^T tanh(W_h h_j + W_s s_t + b_attn) (4).", "Notice that two sets of parameters and different state update functions g(·) are learned for the separate decoders: {W^a_h, W^a_s, b^a_attn, g^a(·)} for the argument decoder; {W^p_h, W^p_s, b^p_attn, g^p(·)} for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (biLSTM) is used to obtain the encoder hidden states h_i for each time step i.", "For the biLSTM, the hidden state is the concatenation of the forward and backward hidden states: h_i = [→h_i; ←h_i].", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , and updated during training.", "The last hidden state of the encoder is used to initialize both decoders.", "In our model the encoder is shared by the argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: a keyphrase decoder and an argument decoder, each implemented with a separate two-layer unidirectional LSTM, in a
similar spirit to one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015) .", "The distinction is that our training objective is the sum of two loss functions: L(θ) = −(α/T_p) Σ_{(x, y^p)∈D} log P(y^p|x; θ) − ((1−α)/T_a) Σ_{(x, y^a)∈D} log P(y^a|x; θ) (5), where T_p and T_a denote the lengths of the reference keyphrase sequence and argument sequence.", "α is a weighting parameter, and it is set as 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "An additional context vector c'_t is then computed over the keyphrase decoder hidden states s^p_j, which is used for computing the new argument decoder state: s^a_t = g'(s^a_{t−1}, [c_t; c'_t], y^a_t) (6), c'_t = Σ_{j=1}^{T_p} α'_{tj} s^p_j (7), α'_{tj} = exp(e'_{tj}) / Σ_{k=1}^{T_p} exp(e'_{tk}) (8), e'_{tj} = v'^T tanh(W'_p s^p_j + W'_a s^a_t + b'_attn) (9), where s^p_j is the hidden state of the keyphrase decoder at position j, s^a_t is the hidden state of the argument decoder at timestep t, and c_t is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy for the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016) , e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on each beam's coverage of content words from the input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential to encourage more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from OP.", "Article Retrieval.", "We first create an inverted index lookup
"Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach to retrieving evidence sentences: given a statement, (1) we construct one query per sentence and retrieve relevant articles from Wikipedia, and (2) we rerank paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from the target user arguments.", "For test, queries are constructed from the OP.", "Article Retrieval.", "We first create an inverted index lookup table for Wikipedia, as done in Chen et al. (2017).", "For a given statement, we construct one query per sentence to broaden the diversity of the retrieved articles.", "Therefore, multiple passes of retrieval are conducted if more than one query is created.", "Specifically, we first collect the topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by the log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as the background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, the query \"the government, my e-mails, national security\" is constructed for the first sentence of the OP in the motivating example (Figure 2).", "The top five retrieved articles with the highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top-ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences and reranked by TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction (a code sketch of these rules is given below): • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP.", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content-word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as the gold-standard generation for training.", "6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence with our current decoder takes a huge amount of time.", "We therefore propose a sampling strategy that allows the encoder to finish encoding within a reasonable time by considering only a subset of the evidence: for each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; the sampled sentences are then concatenated.", "This procedure is repeated three times per statement, where a statement is a user argument for the training data and an OP for the test set.", "In our experiments, we remove duplicate samples and the ones without any retrieved evidence sentence.", "Finally, we break the augmented data down into a training set of 224,553 examples (9,737 unique OPs), 13,911 examples for validation (640 OPs), and 30,417 retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as the encoder and a two-layer unidirectional LSTM as the decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on the RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied, with a maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.",
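Returning to the evidence retrieval step above, the two-stage TF-IDF reranking (first paragraphs, then sentences) can be sketched with scikit-learn as follows; the vectorizer settings are illustrative, since the paper does not specify its exact TF-IDF implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

def rerank_by_tfidf(statement, passages, top_k, min_score=0.0):
    """Rank passages (paragraphs or sentences) by TF-IDF cosine similarity
    to the statement, keeping up to top_k passages with positive scores."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([statement] + passages)
    scores = linear_kernel(matrix[0:1], matrix[1:]).ravel()
    ranked = sorted(zip(passages, scores), key=lambda pair: -pair[1])
    return [p for p, s in ranked if s > min_score][:top_k]

# Mirroring the paper's pipeline: up to 100 paragraphs, then up to 10 sentences.
# top_paragraphs = rerank_by_tfidf(op_text, paragraphs, top_k=100)
# evidence = rerank_by_tfidf(op_text, split_sentences(top_paragraphs), top_k=10)
```

Here `op_text`, `paragraphs`, and `split_sentences` are placeholders for the statement text, the segmented article paragraphs, and any sentence segmenter.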
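And the gold-standard keyphrase construction rules referenced above can be approximated in a few lines. Phrase extraction itself (the Stanford CoreNLP NP/VP chunks) is abstracted away here, and the overlap-resolution rule is approximated by a greedy pass, so this is a sketch rather than the authors' exact procedure.

```python
def select_keyphrases(candidate_phrases, argument_tokens, stopwords):
    """Apply the three selection rules to NP/VP candidates from evidence
    sentences, given the tokens of the human-written argument."""
    content = {t.lower() for t in argument_tokens} - set(stopwords)

    def coverage(phrase):
        # number of argument content words the phrase covers
        return sum(1 for t in phrase.lower().split() if t in content)

    # Rule 2: length between 2 and 10 tokens, overlapping argument content words
    kept = [p for p in candidate_phrases
            if 2 <= len(p.split()) <= 10 and coverage(p) > 0]

    # Rule 3 (greedy approximation): prefer higher coverage, then longer
    # phrases, and drop candidates that overlap an already selected phrase.
    kept.sort(key=lambda p: (coverage(p), len(p.split())), reverse=True)
    selected = []
    for p in kept:
        if not any(p in q or q in p for q in selected):
            selected.append(p)
    return selected
```

The selected phrases would then be joined with the <phrase> delimiter to form the keyphrase decoder's target sequence.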
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
"Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3, we find that reranking with a smaller step size, e.g., p = 5, can generally lead to better METEOR scores.", "Figure 3: Beams are reranked at every 5, 10, and 20 steps (p); for each step size, we also show the effect of varying k, where the top k words are selected deterministically for beam expansion, with 10 − k randomly sampled over the multinomial distribution after removing the k words; reranking with a smaller step size yields better results.", "Although varying the number of top words for beam expansion does not yield a significant difference, we do observe more diverse beams in the system output if more candidate words are selected stochastically (i.e., with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve this goal, we first train a topic-relevance estimation model inspired by the latent semantic model in Huang et al. (2013).", "A pair of OP and argument, each represented as the average of its word embeddings, are separately fed into a two-layer transformation model.", "A dot product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, development, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "Details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between an OP and the corresponding system arguments.", "Each system argument is treated as a positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences are most similar to that of the positive sample.", "Intuitively, if an argument contains more topic-relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.", "Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4.", "The ranker yields significantly better scores for arguments generated from models trained with evidence, compared to arguments generated by the SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in the system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.", "This further implies that our model generates more topic-relevant content.",
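The topic-relevance estimator just described is small enough to sketch directly. The PyTorch version below assumes a single projection tower shared by the OP side and the argument side, with illustrative layer sizes; neither detail is specified in the text.

```python
import torch
import torch.nn as nn

class RelevanceScorer(nn.Module):
    """Two-layer transformation over averaged word embeddings for the OP and
    the argument, followed by a dot product and a sigmoid (a relevance
    score in (0, 1))."""
    def __init__(self, emb_dim=200, hidden_dim=128, proj_dim=64):
        super().__init__()
        self.tower = nn.Sequential(nn.Linear(emb_dim, hidden_dim), nn.Tanh(),
                                   nn.Linear(hidden_dim, proj_dim))

    def forward(self, op_emb, arg_emb):
        # op_emb, arg_emb: (batch, emb_dim) averages of word embeddings
        u = self.tower(op_emb)
        v = self.tower(arg_emb)
        return torch.sigmoid((u * v).sum(dim=-1))  # (batch,)

# scores = RelevanceScorer()(torch.randn(4, 200), torch.randn(4, 200))
```

Trained with the Jaccard-based negative sampling described above, the scorer can then rank each system argument against its five negatives to compute MRR and P@1.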
"Human Evaluation We also hire three trained human judges, who are fluent English speakers, to rate the system arguments on three aspects, each on a scale of 1 to 5 (with 5 as best): grammaticality (whether an argument is fluent), informativeness (whether the argument contains useful information and is not generic), and relevance (whether the argument contains information of a different stance and is not off-topic).", "30 CMV threads are randomly selected, each of which is presented with the randomly shuffled OP statement and four system arguments.", "Table 5: Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments. RETRIEVAL: Gram 4.5 ± 0.6, Info 3.7 ± 0.9, Rel 3.3 ± 1.1; SEQ2SEQ: Gram 3.3 ± 1.1, Info 1.2 ± 0.5, Rel 1.4 ± 0.7; OUR MODEL: Gram 2.5 ± 0.8, Info 1.6 ± 0.8, Rel 1.8 ± 0.8.", "Table 5 shows that our model with the separate decoder and attention over keyphrases produces significantly more informative and relevant arguments than seq2seq trained without evidence (one-way ANOVA, p < 0.005).", "However, we also observe that the human judges prefer the retrieved arguments over those of the generation-based models, illustrating the gap between system arguments and human-edited text.", "Sample arguments are displayed in Figure 4.", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis of the keyphrases generated by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold standard (i.e., are used by human arguments).", "Furthermore, 36% of the generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high-level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4, the keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4, our model generally captures more relevant concepts, e.g., \"military army\" and \"wars of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative-style language, though there is still a noticeable gap between system arguments and human-constructed arguments.", "As discovered by our prior work, both topical content and language style are essential elements of high-quality arguments.", "For future work, generation models with better control over linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in future work to enhance text planning.", "Figure 4: Sample arguments generated by human, our system, and seq2seq trained with evidence; only the main thesis is shown for the input OP; system generations are manually detokenized and capitalized.", "Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine.", "Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.",
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-3
Our Goal
We generate a specific type of argument: counterargument. Input: a statement of belief on some controversial topic Output: a counterargument refuting the statement Input: Humans are not designed to be vegan. Output: We are not designed to be anything, evolution is directionless. You imply unnatural is bad, that is wrong. Driving and using smartphones are also unnatural. 1. Understanding the topic and stance 2. Application of common sense knowledge 3. Generating arguments in natural language texts
We generate a specific type of argument: counterargument. Input: a statement of belief on some controversial topic Output: a counterargument refuting the statement Input: Humans are not designed to be vegan. Output: We are not designed to be anything, evolution is directionless. You imply unnatural is bad, that is wrong. Driving and using smartphones are also unnatural. 1. Understanding the topic and stance 2. Application of common sense knowledge 3. Generating arguments in natural language texts
[]
GEM-SciDuet-train-131#paper-1354#slide-4
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both human and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topicrelevant content than a popular sequence-tosequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .", "Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder 
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts 5 .", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "6 In total, 264,670 politics abstracts and 827,437 of non-politics are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015) , our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens x = {x O ; x E }, where x O is the statement se- quence and x E contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between x O and x E .", "Our model then first generates a set of keyphrases as a sequence y p = {y p l }, followed by an argument y a = {y a t }, by maximizing log P (y|x), where y = {y p ; y a }.", "The objective is further decomposed into t log P (y t |y 1:t−1 , x), with each term estimated by a softmax function over a non-linear transformation of decoder hidden states s a t and s p t , for argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: s t = g(s t−1 , c t , y t ) (1) c t = T j=1 α tj h j (2) α tj = exp(e tj ) T k=1 exp(e tk ) (3) e tj = v T tanh(W h h j + W s s t + b attn ) (4) Notice that two sets of parameters and different state update functions g(·) are learned for separate decoders: {W a h , W a s , b a attn , g a (·)} for the argument decoder; {W p h , W p s , b p attn , g p (·)} for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (bi-LSTM) is used to obtain the encoder hidden states h i for each time step i.", "For biLSTM, the hidden state is the concatenation of forward and backward hidden states: h i = [ − → h i ; ← − h i ].", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , and updated during training.", "The last hidden state of encoder is used to initialize both decoders.", "In our model the encoder is shared by argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: keyphrase decoder and argument decoder, each is implemented with a separate two-layer unidirectional LSTM, in a 
similar spirit with one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015) .", "The distinction is that our training objective is the sum of two loss functions: L(θ) = − α T p (x,y p )∈D log P (y p |x; θ) − (1 − α) T a (x,y a )∈D log P (y a |x; θ) (5) where T p and T a denote the lengths of reference keyphrase sequence and argument sequence.", "α is a weighting parameter, and it is set as 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "Additional context vector c t is then computed over keyphrase decoder hidden states s p j , which is used for computing the new argument decoder state: s a t = g (s a t−1 , [c t ; c t ], y a t ) (6) c t = Tp j=1 α tj s p j (7) α tj = exp(e tj ) Tp k=1 exp(e tk ) (8) e tj = v T tanh(W p s p j + W a s a t + b attn ) (9) where s p j is the hidden state of keyphrase decoder at position j, s a t is the hidden state of argument decoder at timestep t, and c t is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy on the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016) , e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on beam's coverage of content words from input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from OP.", "Article Retrieval.", "We first create an inverted index lookup 
table for Wikipedia as done in Chen et al.", "(2017) .", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval will be conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of OP in the motivating example ( Figure 2 ).", "Top five retrieved articles with highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP .", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as gold-standard generation for training.", "6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence by our current decoder takes a huge amount of time.", "We there propose a sampling strategy to allow the encoder to finish encoding within reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.", "This procedure is repeated three times per statement, where a statement is an user argument for training data and an OP for test set.", "In our experiments, we remove duplicates samples and the ones without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied with the maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.", 
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
could be that arguments generated based on system retrieval contain less topic-specific words and more generic argumentative phrases.", "Since the later is often observed in human written arguments, it may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3 , we find that reranking with a smaller step size, e.g., Beams are reranked at every 5, 10, and 20 steps (p).", "For each step size, we also show the effect of varying k, where top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled over multinomial distribution after removing the k words.", "Reranking with smaller step size yields better results.", "p = 5, can generally lead to better METEOR scores.", "Although varying the number of top words for beam expansion does not yield significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.", "with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve the goal, we first train a topicrelevance estimation model inspired by the latent semantic model in Huang et al.", "(2013) .", "A pair of OP and argument, each represented as the average of word embeddings, are separately fed into a twolayer transformation model.", "A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, developing, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between OP and the corresponding system arguments.", "Each system argument is treated as positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences most similar to that of the positive sample.", "Intuitively, if an argument contains more topic relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.", "Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4 .", "The ranker yields significantly better scores over arguments generated from models trained with evidence, compared to arguments generated by SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.", "This further implies that our 
model generates more topic-relevant content.", "Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 System Gram Info Rel RETRIEVAL 4.5 ± 0.6 3.7 ± 0.9 3.3 ± 1.1 SEQ2SEQ 3.3 ± 1.1 1.2 ± 0.5 1.4 ± 0.7 OUR MODEL 2.5 ± 0.8 1.6 ± 0.8 1.8 ± 0.8 Table 5 : Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments.", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "to 5 (with 5 as best): Grammaticality-whether an argument is fluent, informativeness-whether the argument contains useful information and is not generic, and relevance-whether the argument contains information of a different stance or offtopic.", "30 CMV threads are randomly selected, each of which is presented with randomly-shuffled OP statement and four system arguments.", "Table 5 shows that our model with separate decoder and attention over keyphrases produce significantly more informative and relevant arguments than seq2seq trained without evidence.", "8 However, we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human edited text.", "Sample arguments are displayed in Figure 4 .", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis over the generated keyphrases by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments).", "Furthermore, 36% of generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4 , keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4 , our model generally captures more relevant concepts, e.g., \"military army\" and \"wars Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.", 
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-4
Prior Work
Evidence detection [Rinott et al, 2015] Classification of types of supports [Hua and Wang, 2017] Argument and Evidence Retrieval Argument search engine [Wachsmuth et al, 2017; Stab et al, 2018] Retrieval based argument generation [Sato et al, 2015] Argument strategy based generation [Zukerman et al, 2000]
Evidence detection [Rinott et al, 2015] Classification of types of supports [Hua and Wang, 2017] Argument and Evidence Retrieval Argument search engine [Wachsmuth et al, 2017; Stab et al, 2018] Retrieval based argument generation [Sato et al, 2015] Argument strategy based generation [Zukerman et al, 2000]
[]
GEM-SciDuet-train-131#paper-1354#slide-5
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task of automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as an intermediate representation, followed by a separate decoder producing the final argument based on both the input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoder-based sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .", "Our decoder consists of two separate parts, one of which first generates keyphrases as an intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model-generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generation models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder 
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed by detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from the statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high-quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts 5 .", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "6 In total, 264,670 politics abstracts and 827,437 non-politics abstracts are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015) , our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens x = \{x^O; x^E\}, where x^O is the statement sequence and x^E contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between x^O and x^E.", "Our model then first generates a set of keyphrases as a sequence y^p = \{y^p_l\}, followed by an argument y^a = \{y^a_t\}, by maximizing \log P(y|x), where y = \{y^p; y^a\}.", "The objective is further decomposed into \sum_t \log P(y_t | y_{1:t-1}, x), with each term estimated by a softmax function over a non-linear transformation of decoder hidden states s^a_t and s^p_t, for argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: s_t = g(s_{t-1}, c_t, y_t) (1); c_t = \sum_{j=1}^{T} \alpha_{tj} h_j (2); \alpha_{tj} = \exp(e_{tj}) / \sum_{k=1}^{T} \exp(e_{tk}) (3); e_{tj} = v^\top \tanh(W_h h_j + W_s s_t + b_{attn}) (4). Notice that two sets of parameters and different state update functions g(·) are learned for separate decoders: \{W^a_h, W^a_s, b^a_{attn}, g^a(·)\} for the argument decoder; \{W^p_h, W^p_s, b^p_{attn}, g^p(·)\} for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (bi-LSTM) is used to obtain the encoder hidden states h_i for each time step i.", "For biLSTM, the hidden state is the concatenation of forward and backward hidden states: h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}].", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , and updated during training.", "The last hidden state of the encoder is used to initialize both decoders.", "In our model the encoder is shared by argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: keyphrase decoder and argument decoder, each implemented with a separate two-layer unidirectional LSTM, in a 
similar spirit to one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015) .", "The distinction is that our training objective is the sum of two loss functions: L(\theta) = - (\alpha / T_p) \sum_{(x, y^p) \in D} \log P(y^p | x; \theta) - ((1 - \alpha) / T_a) \sum_{(x, y^a) \in D} \log P(y^a | x; \theta) (5), where T_p and T_a denote the lengths of the reference keyphrase sequence and the argument sequence.", "\alpha is a weighting parameter, and it is set to 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "An additional context vector c'_t is then computed over keyphrase decoder hidden states s^p_j, which is used for computing the new argument decoder state: s^a_t = g'(s^a_{t-1}, [c_t; c'_t], y^a_t) (6); c'_t = \sum_{j=1}^{T_p} \alpha'_{tj} s^p_j (7); \alpha'_{tj} = \exp(e'_{tj}) / \sum_{k=1}^{T_p} \exp(e'_{tk}) (8); e'_{tj} = v'^\top \tanh(W'_p s^p_j + W'_a s^a_t + b'_{attn}) (9), where s^p_j is the hidden state of the keyphrase decoder at position j, s^a_t is the hidden state of the argument decoder at timestep t, and c_t is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy on the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016) , e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on each beam's coverage of content words from the input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from OP.", "Article Retrieval.", "We first create an inverted index lookup 
table for Wikipedia as done in Chen et al.", "(2017) .", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval will be conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of OP in the motivating example ( Figure 2 ).", "Top five retrieved articles with highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP .", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as gold-standard generation for training.", "6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence by our current encoder takes a huge amount of time.", "We therefore propose a sampling strategy to allow the encoder to finish encoding within reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.", "This procedure is repeated three times per statement, where a statement is a user argument for training data and an OP for the test set.", "In our experiments, we remove duplicate samples and the ones without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied with the maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.", 
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
could be that arguments generated based on system retrieval contain fewer topic-specific words and more generic argumentative phrases.", "Since the latter is often observed in human-written arguments, it may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3 , we find that reranking with a smaller step size, e.g., p = 5, can generally lead to better METEOR scores.", "Figure 3: Beams are reranked at every 5, 10, and 20 steps (p).", "For each step size, we also show the effect of varying k, where top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled over the multinomial distribution after removing the k words.", "Reranking with a smaller step size yields better results.", "Although varying the number of top words for beam expansion does not yield a significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.", "with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve the goal, we first train a topic-relevance estimation model inspired by the latent semantic model in Huang et al.", "(2013) .", "A pair of OP and argument, each represented as the average of word embeddings, are separately fed into a two-layer transformation model.", "A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, development, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "Details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between OP and the corresponding system arguments.", "Each system argument is treated as a positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences are most similar to those of the positive sample.", "Intuitively, if an argument contains more topic-relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.", "Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4 .", "The ranker yields significantly better scores over arguments generated from models trained with evidence, compared to arguments generated by the SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.", "This further implies that our 
model generates more topic-relevant content.", "Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 to 5 (with 5 as best): Grammaticality-whether an argument is fluent, informativeness-whether the argument contains useful information and is not generic, and relevance-whether the argument contains information of a different stance or off-topic.", "Table 5 : Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments: RETRIEVAL 4.5 ± 0.6 (Gram), 3.7 ± 0.9 (Info), 3.3 ± 1.1 (Rel); SEQ2SEQ 3.3 ± 1.1 (Gram), 1.2 ± 0.5 (Info), 1.4 ± 0.7 (Rel); OUR MODEL 2.5 ± 0.8 (Gram), 1.6 ± 0.8 (Info), 1.8 ± 0.8 (Rel).", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "30 CMV threads are randomly selected, each of which is presented with randomly-shuffled OP statement and four system arguments.", "Table 5 shows that our model with separate decoder and attention over keyphrases produces significantly more informative and relevant arguments than seq2seq trained without evidence.", "8 However, we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human-edited text.", "Sample arguments are displayed in Figure 4 .", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis of the keyphrases generated by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments).", "Furthermore, 36% of generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high-level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4 , keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4 , our model generally captures more relevant concepts, e.g., \"military army\" and \"wars Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.", 
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-5
Data
A subreddit for open discussion and debate I believe the government should be allowed to view my emails for national security concerns. CMV. I have nothing to hide. I dont break the law, I dont write hate e-mails [U1] Seriously, whether or not is a good thing, it runs up against the protections offered in the Fourth Amendment: [--quote--] [U2] Giving up privacy means giving up some of your right to free speech. Knowing that you might be listened in on may change what you say and how you say it I saved this answer for a Reddit Gold. It did change my opinion - I never thought that We selected the politics and policy related topics for study. We only consider high quality replies (with delta or more upvotes). Statistics as below after removing non-root and low quality replies. Input statement Human argument Avg number of sentences Avg number of tokens
A subreddit for open discussion and debate I believe the government should be allowed to view my emails for national security concerns. CMV. I have nothing to hide. I dont break the law, I dont write hate e-mails [U1] Seriously, whether or not is a good thing, it runs up against the protections offered in the Fourth Amendment: [--quote--] [U2] Giving up privacy means giving up some of your right to free speech. Knowing that you might be listened in on may change what you say and how you say it I saved this answer for a Reddit Gold. It did change my opinion - I never thought that We selected the politics and policy related topics for study. We only consider high quality replies (with delta or more upvotes). Statistics as below after removing non-root and low quality replies. Input statement Human argument Avg number of sentences Avg number of tokens
[]
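For the topic-relevance evaluation described in this entry's paper content (averaged word embeddings fed through two-layer transformations, a dot product, then a sigmoid), a minimal sketch follows; the tanh nonlinearity and layer sizes are my assumptions, since the paper defers those details to its supplementary material.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def project(x, W1, b1, W2, b2):
    # Two-layer transformation of an averaged-embedding input.
    return np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2)

def relevance_score(op_embedding, arg_embedding, op_params, arg_params):
    # Dot product of the two projected vectors, squashed to (0, 1).
    u = project(op_embedding, *op_params)
    v = project(arg_embedding, *arg_params)
    return sigmoid(u @ v)

# op_embedding / arg_embedding are averages of word embeddings; training
# pairs each positive argument with the 5 most dissimilar of 100 randomly
# sampled arguments, measured by Jaccard distance, as negatives.
```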
GEM-SciDuet-train-131#paper-1354#slide-6
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task of automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as an intermediate representation, followed by a separate decoder producing the final argument based on both the input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model according to both automatic evaluation and human assessments.
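The abstract above mentions a keyphrase decoder followed by a separate argument decoder; in the content that follows, the two are trained jointly with the length-normalized objective of Eq. (5). A small sketch of that weighted loss, assuming per-token log-probabilities are already computed (the per-example normalization is my reading of the reconstructed equation):

```python
import numpy as np

def joint_loss(logp_keyphrase, logp_argument, alpha=0.5):
    # Weighted sum of two length-normalized negative log-likelihoods,
    # following Eq. (5); alpha = 0.5 is the paper's reported setting.
    T_p, T_a = len(logp_keyphrase), len(logp_argument)
    return (-alpha / T_p * np.sum(logp_keyphrase)
            - (1.0 - alpha) / T_a * np.sum(logp_argument))
```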
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoder-based sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .", "Our decoder consists of two separate parts, one of which first generates keyphrases as an intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model-generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generation models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder 
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed by detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from the statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high-quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts 5 .", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "6 In total, 264,670 politics abstracts and 827,437 non-politics abstracts are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015) , our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens x = \{x^O; x^E\}, where x^O is the statement sequence and x^E contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between x^O and x^E.", "Our model then first generates a set of keyphrases as a sequence y^p = \{y^p_l\}, followed by an argument y^a = \{y^a_t\}, by maximizing \log P(y|x), where y = \{y^p; y^a\}.", "The objective is further decomposed into \sum_t \log P(y_t | y_{1:t-1}, x), with each term estimated by a softmax function over a non-linear transformation of decoder hidden states s^a_t and s^p_t, for argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: s_t = g(s_{t-1}, c_t, y_t) (1); c_t = \sum_{j=1}^{T} \alpha_{tj} h_j (2); \alpha_{tj} = \exp(e_{tj}) / \sum_{k=1}^{T} \exp(e_{tk}) (3); e_{tj} = v^\top \tanh(W_h h_j + W_s s_t + b_{attn}) (4). Notice that two sets of parameters and different state update functions g(·) are learned for separate decoders: \{W^a_h, W^a_s, b^a_{attn}, g^a(·)\} for the argument decoder; \{W^p_h, W^p_s, b^p_{attn}, g^p(·)\} for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (bi-LSTM) is used to obtain the encoder hidden states h_i for each time step i.", "For biLSTM, the hidden state is the concatenation of forward and backward hidden states: h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}].", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , and updated during training.", "The last hidden state of the encoder is used to initialize both decoders.", "In our model the encoder is shared by argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: keyphrase decoder and argument decoder, each implemented with a separate two-layer unidirectional LSTM, in a 
similar spirit to one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015) .", "The distinction is that our training objective is the sum of two loss functions: L(\theta) = - (\alpha / T_p) \sum_{(x, y^p) \in D} \log P(y^p | x; \theta) - ((1 - \alpha) / T_a) \sum_{(x, y^a) \in D} \log P(y^a | x; \theta) (5), where T_p and T_a denote the lengths of the reference keyphrase sequence and the argument sequence.", "\alpha is a weighting parameter, and it is set to 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "An additional context vector c'_t is then computed over keyphrase decoder hidden states s^p_j, which is used for computing the new argument decoder state: s^a_t = g'(s^a_{t-1}, [c_t; c'_t], y^a_t) (6); c'_t = \sum_{j=1}^{T_p} \alpha'_{tj} s^p_j (7); \alpha'_{tj} = \exp(e'_{tj}) / \sum_{k=1}^{T_p} \exp(e'_{tk}) (8); e'_{tj} = v'^\top \tanh(W'_p s^p_j + W'_a s^a_t + b'_{attn}) (9), where s^p_j is the hidden state of the keyphrase decoder at position j, s^a_t is the hidden state of the argument decoder at timestep t, and c_t is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy on the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016) , e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on each beam's coverage of content words from the input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from OP.", "Article Retrieval.", "We first create an inverted index lookup 
"Relevant Evidence Retrieval", "Retrieval Methodology", "We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from the target user arguments.", "For testing, queries are constructed from the OP.", "Article Retrieval.", "We first create an inverted index lookup table for Wikipedia as done in Chen et al. (2017).", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval are conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as the background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, the query \"the government, my e-mails, national security\" is constructed for the first sentence of the OP in the motivating example (Figure 2).", "The top five retrieved articles with the highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top-ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.",
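The two-stage TF-IDF reranking described above could be implemented roughly as follows; this sketch uses scikit-learn as a stand-in, and the candidate texts and helper name are hypothetical, since the paper does not specify its implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank(candidates, statement, top_k):
    """Rank candidate texts by TF-IDF cosine similarity to the statement,
    keeping up to top_k candidates with positive scores."""
    vec = TfidfVectorizer()
    M = vec.fit_transform(candidates)        # one row per paragraph or sentence
    q = vec.transform([statement])           # the statement as a query
    sims = cosine_similarity(q, M).ravel()
    order = sims.argsort()[::-1]
    return [candidates[i] for i in order[:top_k] if sims[i] > 0]

paragraphs = [
    "Political corruption is the use of powers by government officials for private gain.",
    "The fourth amendment provides protections of personal privacy.",
    "Association football is a team sport played with a spherical ball.",
]
statement = "the government should be allowed to view my emails for national security"
top_paragraphs = rerank(paragraphs, statement, top_k=100)   # paper keeps up to 100 paragraphs
# sentences drawn from top_paragraphs would then be reranked the same way, keeping up to 10
```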
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
"Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3, we find that reranking with a smaller step size, e.g., p = 5, can generally lead to better METEOR scores.", "Figure 3: Beams are reranked at every 5, 10, and 20 steps (p).", "For each step size, we also show the effect of varying k, where the top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled from the multinomial distribution after removing those k words.", "Reranking with a smaller step size yields better results.", "Although varying the number of top words for beam expansion does not yield a significant difference, we do observe more diverse beams in the system output if more candidate words are selected stochastically (i.e., with a smaller k).", "Topic-Relevance Evaluation", "During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve this goal, we first train a topic-relevance estimation model inspired by the latent semantic model in Huang et al. (2013).", "A pair of an OP and an argument, each represented as the average of its word embeddings, is separately fed into a two-layer transformation model.", "A dot product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, development, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "Details are included in the supplementary material.", "We then use this trained model to evaluate the relevance between the OP and the corresponding system arguments.", "Each system argument is treated as a positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences are most similar to those of the positive sample.", "Intuitively, if an argument contains more topic-relevant information, the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from the negative samples.", "The ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4.", "The ranker yields significantly better scores for arguments generated by models trained with evidence than for arguments generated by the SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in the system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% for our separate decoder model with attention over keyphrases.", "This further implies that our model generates more topic-relevant content.",
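A minimal sketch of such a relevance scorer is given below, assuming tanh activations, untrained random weights, and illustrative layer sizes; the paper does not specify these details, so treat this only as a shape-level illustration of the two-layer projection, dot product, and sigmoid.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relevance_score(op_vec, arg_vec, params):
    def project(x, W1, W2):                  # two-layer transformation
        return np.tanh(W2 @ np.tanh(W1 @ x))
    p = project(op_vec, params["W1_op"], params["W2_op"])
    q = project(arg_vec, params["W1_arg"], params["W2_arg"])
    return sigmoid(p @ q)                    # dot product, then sigmoid

rng = np.random.default_rng(2)
d, h, out = 200, 64, 32                      # embedding / hidden / output sizes (assumed)
params = {name: rng.normal(scale=0.1, size=shape) for name, shape in
          [("W1_op", (h, d)), ("W2_op", (out, h)),
           ("W1_arg", (h, d)), ("W2_arg", (out, h))]}
op_vec = rng.normal(size=d)                  # average of OP word embeddings
arg_vec = rng.normal(size=d)                 # average of argument word embeddings
print(relevance_score(op_vec, arg_vec, params))
```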
"Human Evaluation", "We also hire three trained human judges who are fluent English speakers to rate the system arguments on the following three aspects on a scale of 1 to 5 (with 5 as best): grammaticality (whether an argument is fluent), informativeness (whether the argument contains useful information and is not generic), and relevance (whether the argument contains information of a different stance or is off-topic).", "30 CMV threads are randomly selected, each of which is presented with the OP statement and four system arguments in randomly shuffled order.", "Table 5: Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments.", "System | Gram | Info | Rel", "RETRIEVAL | 4.5 ± 0.6 | 3.7 ± 0.9 | 3.3 ± 1.1", "SEQ2SEQ | 3.3 ± 1.1 | 1.2 ± 0.5 | 1.4 ± 0.7", "OUR MODEL | 2.5 ± 0.8 | 1.6 ± 0.8 | 1.8 ± 0.8", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "Table 5 shows that our model with separate decoder and attention over keyphrases produces significantly more informative and relevant arguments than seq2seq trained without evidence.", "However, we also observe that human judges prefer the retrieved arguments over those of the generation-based models, illustrating the gap between system arguments and human-edited text.", "Sample arguments are displayed in Figure 4.", "Further Discussion", "Keyphrase Generation Analysis.", "Here we provide further analysis of the keyphrases generated by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold standard (i.e., are used by human arguments).", "Furthermore, 36% of the generated keyphrases are reused by our system arguments.", "Upon human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high-level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4, the keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4, our model generally captures more relevant concepts, e.g., \"military army\" and \"wars of the world\", as discussed in the first example.", "Figure 4: Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine", "Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.",
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-6
Pipeline
<phz> right to privacy<phz> I believe the <evd> edward snowden <arg> you are ignoring the Input statement Evidence sentences I believe the government should be allowed to view my emails for national security concerns. CMV. 1. Edward Snowden: Arguing that you dont care about right to privacy because. I have nothing to hide. I dont break the law 2. Political corruption is the use of powers by government officials for illegitimate private gain. 5. Argument Decoding (LSTM)
[]
GEM-SciDuet-train-131#paper-1354#slide-7
Step 1 Document Retrieval
Goal: to extract relevant evidence for counterarguments Formed from topic signatures [Lin and Hovy, 2000] Representative of the text, measured by log-likelihood ratio E.g. government, emails, national security, etc. in the following post I believe the government should be allowed to view my emails for national security concerns. CMV. I have nothing to hide. I dont break the law
Goal: to extract relevant evidence for counterarguments Formed from topic signatures [Lin and Hovy, 2000] Representative of the text, measured by log-likelihood ratio E.g. government, emails, national security, etc. in the following post I believe the government should be allowed to view my emails for national security concerns. CMV. I have nothing to hide. I dont break the law
[]
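The slide above centers on topic-signature scoring, so a small sketch may help make it concrete. The following Python is a minimal illustration of log-likelihood-ratio term scoring in the spirit of Lin and Hovy (2000); the function names, the binomial formulation, and the 10.0 cutoff are my own assumptions rather than details taken from the paper or slides.

import math
from collections import Counter

def _log_binom(k, n, p):
    # Binomial log-likelihood with the constant coefficient dropped;
    # the p in {0, 1} guard covers the k*log(0) limit cases exactly.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def topic_signatures(post_tokens, background_tokens, cutoff=10.0):
    """Score each term in the post against a background corpus."""
    fg, bg = Counter(post_tokens), Counter(background_tokens)
    n_fg, n_bg = sum(fg.values()), sum(bg.values())
    signatures = []
    for term, k1 in fg.items():
        k2 = bg.get(term, 0)
        p_all = (k1 + k2) / (n_fg + n_bg)  # null hypothesis: one shared rate
        llr = 2.0 * (_log_binom(k1, n_fg, k1 / n_fg)
                     + _log_binom(k2, n_bg, k2 / n_bg if n_bg else 0.0)
                     - _log_binom(k1, n_fg, p_all)
                     - _log_binom(k2, n_bg, p_all))
        if llr > cutoff:
            signatures.append((term, llr))
    return sorted(signatures, key=lambda pair: -pair[1])

On the slide's example post, content terms such as "government", "emails", and "national security" would be expected to clear the cutoff, while function words shared with the background corpus would not.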
GEM-SciDuet-train-131#paper-1354#slide-8
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task of automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as an intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .", "Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder 
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts.", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "In total, 264,670 politics abstracts and 827,437 non-politics abstracts are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015), our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens $x = \{x^O; x^E\}$, where $x^O$ is the statement sequence and $x^E$ contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between $x^O$ and $x^E$.", "Our model then first generates a set of keyphrases as a sequence $y^p = \{y^p_l\}$, followed by an argument $y^a = \{y^a_t\}$, by maximizing $\log P(y|x)$, where $y = \{y^p; y^a\}$.", "The objective is further decomposed into $\sum_t \log P(y_t | y_{1:t-1}, x)$, with each term estimated by a softmax function over a non-linear transformation of decoder hidden states $s^a_t$ and $s^p_t$, for the argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: $s_t = g(s_{t-1}, c_t, y_t)$ (1); $c_t = \sum_{j=1}^{T} \alpha_{tj} h_j$ (2); $\alpha_{tj} = \exp(e_{tj}) / \sum_{k=1}^{T} \exp(e_{tk})$ (3); $e_{tj} = v^\top \tanh(W_h h_j + W_s s_t + b_{attn})$ (4). Notice that two sets of parameters and different state update functions $g(\cdot)$ are learned for the separate decoders: $\{W^a_h, W^a_s, b^a_{attn}, g^a(\cdot)\}$ for the argument decoder; $\{W^p_h, W^p_s, b^p_{attn}, g^p(\cdot)\}$ for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (biLSTM) is used to obtain the encoder hidden states $h_i$ for each time step $i$.", "For the biLSTM, the hidden state is the concatenation of the forward and backward hidden states: $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$.", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014), and updated during training.", "The last hidden state of the encoder is used to initialize both decoders.", "In our model the encoder is shared by the argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: a keyphrase decoder and an argument decoder, each implemented with a separate two-layer unidirectional LSTM, in a
similar spirit with one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015).", "The distinction is that our training objective is the sum of two loss functions: $L(\theta) = -\frac{\alpha}{T^p} \sum_{(x, y^p) \in D} \log P(y^p|x; \theta) - \frac{1-\alpha}{T^a} \sum_{(x, y^a) \in D} \log P(y^a|x; \theta)$ (5), where $T^p$ and $T^a$ denote the lengths of the reference keyphrase sequence and argument sequence.", "$\alpha$ is a weighting parameter, and it is set to 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "An additional context vector $c'_t$ is then computed over the keyphrase decoder hidden states $s^p_j$, which is used for computing the new argument decoder state: $s^a_t = g'(s^a_{t-1}, [c_t; c'_t], y^a_t)$ (6); $c'_t = \sum_{j=1}^{T_p} \alpha'_{tj} s^p_j$ (7); $\alpha'_{tj} = \exp(e'_{tj}) / \sum_{k=1}^{T_p} \exp(e'_{tk})$ (8); $e'_{tj} = v'^\top \tanh(W'_p s^p_j + W'_a s^a_t + b'_{attn})$ (9), where $s^p_j$ is the hidden state of the keyphrase decoder at position $j$, $s^a_t$ is the hidden state of the argument decoder at timestep $t$, and $c_t$ is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy for the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016), e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on each beam's coverage of content words from the input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from the OP.", "Article Retrieval.", "We first create an inverted index lookup
table for Wikipedia as done in Chen et al.", "(2017).", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval will be conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of the OP in the motivating example (Figure 2).", "The top five retrieved articles with the highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top-ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP.", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as the gold-standard generation for training.", "Experimental Setup Final Dataset Statistics Encoding the full set of evidence with our current encoder takes a huge amount of time.", "We therefore propose a sampling strategy that allows the encoder to finish encoding within a reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.", "This procedure is repeated three times per statement, where a statement is a user argument for the training data and an OP for the test set.", "In our experiments, we remove duplicate samples and the ones without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied with a maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.",
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
could be that arguments generated based on system retrieval contain fewer topic-specific words and more generic argumentative phrases.", "Since the latter is often observed in human-written arguments, it may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3, we find that reranking with a smaller step size, e.g., p = 5, can generally lead to better METEOR scores.", "Figure 3: Beams are reranked at every 5, 10, and 20 steps (p).", "For each step size, we also show the effect of varying k, where the top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled over the multinomial distribution after removing the k words.", "Reranking with a smaller step size yields better results.", "Although varying the number of top words for beam expansion does not yield a significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.", "with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve this goal, we first train a topic-relevance estimation model inspired by the latent semantic model in Huang et al.", "(2013).", "A pair of OP and argument, each represented as the average of its word embeddings, are separately fed into a two-layer transformation model.", "A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, development, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "Details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between an OP and the corresponding system arguments.", "Each system argument is treated as a positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences are most similar to those of the positive sample.", "Intuitively, if an argument contains more topic-relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.", "Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4.", "The ranker yields significantly better scores for arguments generated from models trained with evidence, compared to arguments generated by the SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.", "This further implies that our
model generates more topic-relevant content.", "Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 to 5 (with 5 as best): grammaticality (whether an argument is fluent), informativeness (whether the argument contains useful information and is not generic), and relevance (whether the argument contains information of a different stance or is off-topic).", "Table 5: Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments. RETRIEVAL: Gram 4.5 ± 0.6, Info 3.7 ± 0.9, Rel 3.3 ± 1.1; SEQ2SEQ: Gram 3.3 ± 1.1, Info 1.2 ± 0.5, Rel 1.4 ± 0.7; OUR MODEL: Gram 2.5 ± 0.8, Info 1.6 ± 0.8, Rel 1.8 ± 0.8.", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "30 CMV threads are randomly selected, each of which is presented with the OP statement and the four system arguments in randomly shuffled order.", "Table 5 shows that our model with separate decoder and attention over keyphrases produces significantly more informative and relevant arguments than seq2seq trained without evidence.", "However, we also observe that human judges prefer the retrieved arguments over those of the generation-based models, illustrating the gap between system arguments and human-edited text.", "Sample arguments are displayed in Figure 4.", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis of the keyphrases generated by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., are used by human arguments).", "Furthermore, 36% of generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high-level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4, keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4, our model generally captures more relevant concepts, e.g., \"military army\" and \"wars Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.",
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-8
Step 2 Sentence Reranking
Returned articles are broken into paragraphs and sentences. Sentences are ranked by TF-IDF similarity against the post. 1. Edward Snowden: Arguing that you dont care about right to privacy because. 2. Political corruption is the use of powers by government officials for illegitimate private gain.
Returned articles are broken into paragraphs and sentences. Sentences are ranked by TF-IDF similarity against the post. 1. Edward Snowden: Arguing that you dont care about right to privacy because. 2. Political corruption is the use of powers by government officials for illegitimate private gain.
[]
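To make the two-stage reranking on the slide above concrete, here is a minimal Python sketch using scikit-learn's TF-IDF vectorizer, with cosine similarity standing in for the paper's TF-IDF similarity; the vectorizer settings and the injected sentence splitter are my assumptions, while the cutoffs (up to 100 paragraphs, then up to 10 sentences, both with positive scores) follow the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank(post, candidates, top_k):
    """Return up to top_k candidates with positive similarity to the post."""
    matrix = TfidfVectorizer(stop_words="english").fit_transform([post] + candidates)
    sims = cosine_similarity(matrix[0], matrix[1:]).ravel()
    ranked = sorted(zip(candidates, sims), key=lambda pair: -pair[1])
    return [text for text, score in ranked[:top_k] if score > 0]

def evidence_sentences(post, paragraphs, split_sentences):
    # Stage 1: rerank paragraphs; Stage 2: rerank their sentences.
    top_paragraphs = rerank(post, paragraphs, top_k=100)
    sentences = [s for p in top_paragraphs for s in split_sentences(p)]
    return rerank(post, sentences, top_k=10)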
GEM-SciDuet-train-131#paper-1354#slide-9
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task of automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as an intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .", "Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder 
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts.", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "In total, 264,670 politics abstracts and 827,437 non-politics abstracts are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015), our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens $x = \{x^O; x^E\}$, where $x^O$ is the statement sequence and $x^E$ contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between $x^O$ and $x^E$.", "Our model then first generates a set of keyphrases as a sequence $y^p = \{y^p_l\}$, followed by an argument $y^a = \{y^a_t\}$, by maximizing $\log P(y|x)$, where $y = \{y^p; y^a\}$.", "The objective is further decomposed into $\sum_t \log P(y_t | y_{1:t-1}, x)$, with each term estimated by a softmax function over a non-linear transformation of decoder hidden states $s^a_t$ and $s^p_t$, for the argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: $s_t = g(s_{t-1}, c_t, y_t)$ (1); $c_t = \sum_{j=1}^{T} \alpha_{tj} h_j$ (2); $\alpha_{tj} = \exp(e_{tj}) / \sum_{k=1}^{T} \exp(e_{tk})$ (3); $e_{tj} = v^\top \tanh(W_h h_j + W_s s_t + b_{attn})$ (4). Notice that two sets of parameters and different state update functions $g(\cdot)$ are learned for the separate decoders: $\{W^a_h, W^a_s, b^a_{attn}, g^a(\cdot)\}$ for the argument decoder; $\{W^p_h, W^p_s, b^p_{attn}, g^p(\cdot)\}$ for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (biLSTM) is used to obtain the encoder hidden states $h_i$ for each time step $i$.", "For the biLSTM, the hidden state is the concatenation of the forward and backward hidden states: $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$.", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014), and updated during training.", "The last hidden state of the encoder is used to initialize both decoders.", "In our model the encoder is shared by the argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: a keyphrase decoder and an argument decoder, each implemented with a separate two-layer unidirectional LSTM, in a
similar spirit with one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015).", "The distinction is that our training objective is the sum of two loss functions: $L(\theta) = -\frac{\alpha}{T^p} \sum_{(x, y^p) \in D} \log P(y^p|x; \theta) - \frac{1-\alpha}{T^a} \sum_{(x, y^a) \in D} \log P(y^a|x; \theta)$ (5), where $T^p$ and $T^a$ denote the lengths of the reference keyphrase sequence and argument sequence.", "$\alpha$ is a weighting parameter, and it is set to 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "An additional context vector $c'_t$ is then computed over the keyphrase decoder hidden states $s^p_j$, which is used for computing the new argument decoder state: $s^a_t = g'(s^a_{t-1}, [c_t; c'_t], y^a_t)$ (6); $c'_t = \sum_{j=1}^{T_p} \alpha'_{tj} s^p_j$ (7); $\alpha'_{tj} = \exp(e'_{tj}) / \sum_{k=1}^{T_p} \exp(e'_{tk})$ (8); $e'_{tj} = v'^\top \tanh(W'_p s^p_j + W'_a s^a_t + b'_{attn})$ (9), where $s^p_j$ is the hidden state of the keyphrase decoder at position $j$, $s^a_t$ is the hidden state of the argument decoder at timestep $t$, and $c_t$ is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy for the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016), e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on each beam's coverage of content words from the input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from the OP.", "Article Retrieval.", "We first create an inverted index lookup
table for Wikipedia as done in Chen et al.", "(2017).", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval will be conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of the OP in the motivating example (Figure 2).", "The top five retrieved articles with the highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top-ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP.", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as the gold-standard generation for training.", "Experimental Setup Final Dataset Statistics Encoding the full set of evidence with our current encoder takes a huge amount of time.", "We therefore propose a sampling strategy that allows the encoder to finish encoding within a reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.", "This procedure is repeated three times per statement, where a statement is a user argument for the training data and an OP for the test set.", "In our experiments, we remove duplicate samples and the ones without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied with a maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.",
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
"As can be seen from Table 3, our models produce better BLEU scores than almost all the comparisons.", "In particular, our models with a separate decoder yield significantly higher BLEU and METEOR scores than all seq2seq-based models (approximate randomization test, p < 0.0001).", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both the input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system-retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason could be that arguments generated based on system retrieval contain fewer topic-specific words and more generic argumentative phrases.", "Since the latter are often observed in human-written arguments, this may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3, we find that reranking with a smaller step size, e.g., p = 5, can generally lead to better METEOR scores.", "Figure 3 (caption): Beams are reranked every 5, 10, and 20 steps (p); for each step size, we also show the effect of varying k, where the top k words are selected deterministically for beam expansion and the remaining 10 − k words are randomly sampled from the multinomial distribution after removing those k words; reranking with a smaller step size yields better results.", "Although varying the number of top words for beam expansion does not yield a significant difference, we do observe more diverse beams in the system output when more candidate words are selected stochastically (i.e., with a smaller k).", "Topic-Relevance Evaluation.", "During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve this goal, we first train a topic-relevance estimation model inspired by the latent semantic model of Huang et al. (2013).", "An OP and an argument, each represented as the average of its word embeddings, are separately fed into a two-layer transformation model.", "A dot product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, development, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "Details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between the OP and the corresponding system arguments.", "Each system argument is treated as a positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences are most similar to those of the positive sample.", "Intuitively, if an argument contains more topic-relevant information, the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score and thus cannot easily be distinguished from the negative samples.", "The ranking metrics MRR and Precision at 1 (P@1) are used, with results reported in Table 4.", "The ranker yields significantly better scores for arguments generated by models trained with evidence than for arguments generated by the SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% for our separate decoder model with attention over keyphrases.", "This further implies that our model generates more topic-relevant content.",
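A minimal sketch of the relevance scorer and the ranking metrics described above. The weight matrices, their shapes, and the tanh activations are our assumptions about the unspecified two-layer transformation; the embeddings are assumed to be pre-averaged word vectors.

```python
import numpy as np

def relevance_score(op_emb, arg_emb, W1, W2, V1, V2):
    """Project averaged word embeddings of OP and argument through two
    layers each, then take a dot product followed by a sigmoid."""
    op_h = np.tanh(W2 @ np.tanh(W1 @ op_emb))
    arg_h = np.tanh(V2 @ np.tanh(V1 @ arg_emb))
    return 1.0 / (1.0 + np.exp(-(op_h @ arg_h)))

def mrr_and_p_at_1(positive_score, negative_scores):
    """Rank the positive sample against its five negatives."""
    rank = 1 + sum(s > positive_score for s in negative_scores)
    return 1.0 / rank, float(rank == 1)
```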
"Human Evaluation.", "We also hire three trained human judges who are fluent English speakers to rate system arguments on three aspects, each on a scale of 1 to 5 (with 5 as best): grammaticality (whether an argument is fluent), informativeness (whether the argument contains useful information and is not generic), and relevance (whether the argument contains information of a different stance or is off-topic).", "30 CMV threads are randomly selected, each of which is presented with the OP statement and four system arguments in shuffled order.", "Table 5 (human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments): RETRIEVAL: Gram 4.5 ± 0.6, Info 3.7 ± 0.9, Rel 3.3 ± 1.1; SEQ2SEQ: Gram 3.3 ± 1.1, Info 1.2 ± 0.5, Rel 1.4 ± 0.7; OUR MODEL: Gram 2.5 ± 0.8, Info 1.6 ± 0.8, Rel 1.8 ± 0.8.", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "Table 5 shows that our model with separate decoder and attention over keyphrases produces significantly more informative and relevant arguments than the seq2seq model trained without evidence.", "However, we also observe that human judges prefer the retrieved arguments over those of the generation-based models, illustrating the gap between system arguments and human-edited text.", "Sample arguments are displayed in Figure 4.", "Further Discussion.", "Keyphrase Generation Analysis.", "Here we provide further analysis of the keyphrases generated by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold standard (i.e., they are used in human arguments).", "Furthermore, 36% of the generated keyphrases are reused by our system arguments.", "Upon human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high-level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4, the keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.",
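The reuse statistics above can be computed with a simple overlap heuristic. The exact matching criterion is not specified in the paper, so the all-tokens-present test below is our assumption.

```python
def reuse_rate(generated_keyphrases, text):
    """Fraction of generated keyphrases whose tokens all occur in the
    given text (gold argument or system argument)."""
    text_tokens = set(text.lower().split())
    reused = [p for p in generated_keyphrases
              if set(p.lower().split()) <= text_tokens]
    return len(reused) / max(len(generated_keyphrases), 1)

# Example with the keyphrases discussed above, against a system argument.
print(reuse_rate(["the motive", "russian"],
                 "The motive of the russian government is unclear"))
```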
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-9
Step 3 Encoding
Encode the input statement and the evidence sentences, separated by the <evd> token. Example: "I believe the ..." (input statement) <evd> "edward snowden ..." (evidence sentences).
Encode the input statement and the evidence sentences, separated by the <evd> token. Example: "I believe the ..." (input statement) <evd> "edward snowden ..." (evidence sentences).
[]
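The encoding step in the slide above (statement and evidence concatenated around an <evd> token and fed to a shared two-layer biLSTM encoder, per the paper content in this record) can be sketched in PyTorch. The toy vocabulary and the token indices are illustrative; the 200-dimensional embeddings and two-layer biLSTM follow the paper's training setup.

```python
import torch
import torch.nn as nn

# Toy vocabulary; real indices would come from the 50k-word vocabulary.
vocab = {"<pad>": 0, "<evd>": 1, "i": 2, "believe": 3, "the": 4,
         "edward": 5, "snowden": 6}

statement = ["i", "believe", "the"]
evidence = ["edward", "snowden"]
tokens = statement + ["<evd>"] + evidence          # one shared input sequence
ids = torch.tensor([[vocab[t] for t in tokens]])   # shape (batch=1, seq_len=6)

emb = nn.Embedding(len(vocab), 200)
encoder = nn.LSTM(input_size=200, hidden_size=200, num_layers=2,
                  batch_first=True, bidirectional=True)
states, _ = encoder(emb(ids))   # hidden states feed both decoders via attention
print(states.shape)             # torch.Size([1, 6, 400])
```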
GEM-SciDuet-train-131#paper-1354#slide-10
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both human and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topicrelevant content than a popular sequence-tosequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .", "Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder 
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts 5 .", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "6 In total, 264,670 politics abstracts and 827,437 of non-politics are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015) , our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens x = {x O ; x E }, where x O is the statement se- quence and x E contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between x O and x E .", "Our model then first generates a set of keyphrases as a sequence y p = {y p l }, followed by an argument y a = {y a t }, by maximizing log P (y|x), where y = {y p ; y a }.", "The objective is further decomposed into t log P (y t |y 1:t−1 , x), with each term estimated by a softmax function over a non-linear transformation of decoder hidden states s a t and s p t , for argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: s t = g(s t−1 , c t , y t ) (1) c t = T j=1 α tj h j (2) α tj = exp(e tj ) T k=1 exp(e tk ) (3) e tj = v T tanh(W h h j + W s s t + b attn ) (4) Notice that two sets of parameters and different state update functions g(·) are learned for separate decoders: {W a h , W a s , b a attn , g a (·)} for the argument decoder; {W p h , W p s , b p attn , g p (·)} for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (bi-LSTM) is used to obtain the encoder hidden states h i for each time step i.", "For biLSTM, the hidden state is the concatenation of forward and backward hidden states: h i = [ − → h i ; ← − h i ].", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , and updated during training.", "The last hidden state of encoder is used to initialize both decoders.", "In our model the encoder is shared by argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: keyphrase decoder and argument decoder, each is implemented with a separate two-layer unidirectional LSTM, in a 
similar spirit with one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015) .", "The distinction is that our training objective is the sum of two loss functions: L(θ) = − α T p (x,y p )∈D log P (y p |x; θ) − (1 − α) T a (x,y a )∈D log P (y a |x; θ) (5) where T p and T a denote the lengths of reference keyphrase sequence and argument sequence.", "α is a weighting parameter, and it is set as 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "Additional context vector c t is then computed over keyphrase decoder hidden states s p j , which is used for computing the new argument decoder state: s a t = g (s a t−1 , [c t ; c t ], y a t ) (6) c t = Tp j=1 α tj s p j (7) α tj = exp(e tj ) Tp k=1 exp(e tk ) (8) e tj = v T tanh(W p s p j + W a s a t + b attn ) (9) where s p j is the hidden state of keyphrase decoder at position j, s a t is the hidden state of argument decoder at timestep t, and c t is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy on the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016) , e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on beam's coverage of content words from input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from OP.", "Article Retrieval.", "We first create an inverted index lookup 
table for Wikipedia as done in Chen et al.", "(2017) .", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval will be conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of OP in the motivating example ( Figure 2 ).", "Top five retrieved articles with highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP .", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as gold-standard generation for training.", "6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence by our current decoder takes a huge amount of time.", "We there propose a sampling strategy to allow the encoder to finish encoding within reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.", "This procedure is repeated three times per statement, where a statement is an user argument for training data and an OP for test set.", "In our experiments, we remove duplicates samples and the ones without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied with the maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.", 
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
could be that arguments generated based on system retrieval contain less topic-specific words and more generic argumentative phrases.", "Since the later is often observed in human written arguments, it may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3 , we find that reranking with a smaller step size, e.g., Beams are reranked at every 5, 10, and 20 steps (p).", "For each step size, we also show the effect of varying k, where top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled over multinomial distribution after removing the k words.", "Reranking with smaller step size yields better results.", "p = 5, can generally lead to better METEOR scores.", "Although varying the number of top words for beam expansion does not yield significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.", "with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve the goal, we first train a topicrelevance estimation model inspired by the latent semantic model in Huang et al.", "(2013) .", "A pair of OP and argument, each represented as the average of word embeddings, are separately fed into a twolayer transformation model.", "A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, developing, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between OP and the corresponding system arguments.", "Each system argument is treated as positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences most similar to that of the positive sample.", "Intuitively, if an argument contains more topic relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.", "Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4 .", "The ranker yields significantly better scores over arguments generated from models trained with evidence, compared to arguments generated by SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.", "This further implies that our 
model generates more topic-relevant content.", "Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 System Gram Info Rel RETRIEVAL 4.5 ± 0.6 3.7 ± 0.9 3.3 ± 1.1 SEQ2SEQ 3.3 ± 1.1 1.2 ± 0.5 1.4 ± 0.7 OUR MODEL 2.5 ± 0.8 1.6 ± 0.8 1.8 ± 0.8 Table 5 : Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments.", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "to 5 (with 5 as best): Grammaticality-whether an argument is fluent, informativeness-whether the argument contains useful information and is not generic, and relevance-whether the argument contains information of a different stance or offtopic.", "30 CMV threads are randomly selected, each of which is presented with randomly-shuffled OP statement and four system arguments.", "Table 5 shows that our model with separate decoder and attention over keyphrases produce significantly more informative and relevant arguments than seq2seq trained without evidence.", "8 However, we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human edited text.", "Sample arguments are displayed in Figure 4 .", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis over the generated keyphrases by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments).", "Furthermore, 36% of generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4 , keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4 , our model generally captures more relevant concepts, e.g., \"military army\" and \"wars Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.", 
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-10
Step 4 Keyphrase Decoding
Generate keyphrases as an intermediate step. Aim to inform the model of the major talking points. Mimic keyphrases that are likely to be reused by humans. Example: I believe the ... <evd> edward snowden ... <phz> right to privacy <phz>. We extract noun phrases and verb phrases; the length has to be between 2 and 10 tokens, and each phrase has to contain non-stopwords. Evidence: "Numerous civil rights groups and privacy groups oppose surveillance as a violation of people's right to privacy."
Generate keyphrases as an intermediate step. Aim to inform the model of the major talking points. Mimic keyphrases that are likely to be reused by humans. Example: I believe the ... <evd> edward snowden ... <phz> right to privacy <phz>. We extract noun phrases and verb phrases; the length has to be between 2 and 10 tokens, and each phrase has to contain non-stopwords. Evidence: "Numerous civil rights groups and privacy groups oppose surveillance as a violation of people's right to privacy."
[]
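The filtering rules in the slide above (phrases of 2 to 10 tokens, containing a non-stopword, overlapping the argument's content words) can be sketched as a simple filter. The noun/verb phrase extraction itself is done with Stanford CoreNLP in the paper and is assumed given here; the tiny stopword list is illustrative.

```python
STOPWORDS = {"a", "an", "the", "of", "to", "as", "is", "are", "and"}  # abbreviated

def keep_phrase(phrase, argument_tokens):
    toks = phrase.lower().split()
    if not 2 <= len(toks) <= 10:                 # length rule
        return False
    if all(t in STOPWORDS for t in toks):        # must contain a non-stopword
        return False
    content = set(argument_tokens) - STOPWORDS   # argument content words
    return bool(set(toks) & content)             # overlap rule

phrases = ["right to privacy", "the", "oppose surveillance"]
arg = "people have a right to privacy and oppose mass surveillance".split()
print([p for p in phrases if keep_phrase(p, arg)])
# ['right to privacy', 'oppose surveillance']
```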
GEM-SciDuet-train-131#paper-1354#slide-11
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both human and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topicrelevant content than a popular sequence-tosequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
adapt to new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from the Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoder-based sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014), which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017).", "Our encoder takes as input a statement on a disputed issue and a set of relevant evidence automatically retrieved from English Wikipedia 2.", "Our decoder consists of two separate parts: one first generates keyphrases as an intermediate representation of \"talking points\", and the other then generates an argument based on both the input and the keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement with a separately trained relevance estimation model.", "Results suggest that arguments generated by our model are more likely to be predicted as on-topic than those of other seq2seq-based generation models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2.", "Given a statement, a set of queries is constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts 5.", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "6 In total, 264,670 politics abstracts and 827,437 non-politics abstracts are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and to produce arguments based on the input and the keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015), our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending to both the input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens x = {x^O; x^E}, where x^O is the statement sequence and x^E contains relevant evidence extracted from Wikipedia by a separate retrieval module.", "A special token <evd> is inserted between x^O and x^E.", "Our model then first generates a set of keyphrases as a sequence y^p = {y^p_l}, followed by an argument y^a = {y^a_t}, by maximizing \log P(y | x), where y = {y^p; y^a}.", "The objective is further decomposed into \sum_t \log P(y_t | y_{1:t-1}, x), with each term estimated by a softmax function over a non-linear transformation of the decoder hidden states s^a_t and s^p_t, for the argument decoder and the keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: s_t = g(s_{t-1}, c_t, y_t) (1); c_t = \sum_{j=1}^{T} \alpha_{tj} h_j (2); \alpha_{tj} = \exp(e_{tj}) / \sum_{k=1}^{T} \exp(e_{tk}) (3); e_{tj} = v^\top \tanh(W_h h_j + W_s s_t + b_{attn}) (4). Notice that two sets of parameters and different state update functions g(·) are learned for the separate decoders: {W^a_h, W^a_s, b^a_{attn}, g^a(·)} for the argument decoder and {W^p_h, W^p_s, b^p_{attn}, g^p(·)} for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (biLSTM) is used to obtain the encoder hidden states h_i for each time step i.", "For the biLSTM, the hidden state is the concatenation of the forward and backward hidden states: h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}].", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) and updated during training.", "The last hidden state of the encoder is used to initialize both decoders.", "In our model, the encoder is shared by the argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders, a keyphrase decoder and an argument decoder, each implemented with a separate two-layer unidirectional LSTM, in a
similar spirit to one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015).", "The distinction is that our training objective is the sum of two loss functions: L(\theta) = -\frac{\alpha}{T_p} \sum_{(x, y^p) \in D} \log P(y^p | x; \theta) - \frac{1 - \alpha}{T_a} \sum_{(x, y^a) \in D} \log P(y^a | x; \theta) (5), where T_p and T_a denote the lengths of the reference keyphrase sequence and the reference argument sequence.", "\alpha is a weighting parameter, and it is set to 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend to both the encoder hidden states and the keyphrase decoder hidden states.", "An additional context vector c'_t is computed over the keyphrase decoder hidden states s^p_j and is used for computing the new argument decoder state: s^a_t = g'(s^a_{t-1}, [c_t; c'_t], y^a_t) (6); c'_t = \sum_{j=1}^{T_p} \alpha'_{tj} s^p_j (7); \alpha'_{tj} = \exp(e'_{tj}) / \sum_{k=1}^{T_p} \exp(e'_{tk}) (8); e'_{tj} = v'^\top \tanh(W'_p s^p_j + W'_a s^a_t + b'_{attn}) (9), where s^p_j is the hidden state of the keyphrase decoder at position j, s^a_t is the hidden state of the argument decoder at timestep t, and c_t is computed as in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy for the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016); e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we pick only the top n words (n < k) deterministically, and randomly draw another k - n words from the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on each beam's coverage of content words from the input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for the experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach to retrieving evidence sentences: given a statement, we (1) construct one query per sentence and retrieve relevant articles from Wikipedia, and (2) rerank paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from the target user arguments.", "For test, queries are constructed from the OP.", "Article Retrieval.", "We first create an inverted index lookup
table for Wikipedia, as done in Chen et al.", "(2017).", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval are conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as the background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, the query \"the government, my e-mails, national security\" is constructed for the first sentence of the OP in the motivating example (Figure 2).", "The top five retrieved articles with the highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top-ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, which are reranked by TF-IDF similarity again.", "We keep only up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP.", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content-word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as the gold-standard generation for training.", "6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence with our current encoder takes a huge amount of time.", "We therefore propose a sampling strategy that allows the encoder to finish encoding within a reasonable time by considering only a subset of the evidence: for each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; the sampled sentences are then concatenated.", "This procedure is repeated three times per statement, where a statement is a user argument for the training data and an OP for the test set.", "In our experiments, we remove duplicate samples and those without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 examples for validation (640 OPs), and 30,417 examples retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as the encoder and a two-layer unidirectional LSTM as the decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on the RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied, with a maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.",
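The additive attention of Eqs. (1)-(4) and its keyphrase-side extension in Eqs. (6)-(9) above share one scoring routine, differing only in the memory attended over. Before the training details continue below, here is a minimal NumPy sketch of that dual attention; all names, toy dimensions, and the random initialization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attn_context(states, s_t, W_mem, W_dec, b, v):
    """Additive attention. states: (T, d) memory (encoder states h_j for
    Eqs. (2)-(4), or keyphrase-decoder states s^p_j for Eqs. (7)-(9));
    s_t: (d,) current argument-decoder state. Returns the context vector."""
    scores = np.tanh(states @ W_mem.T + s_t @ W_dec.T + b) @ v  # e_tj
    alpha = softmax(scores)                                     # alpha_tj
    return alpha @ states                                       # c_t

rng = np.random.default_rng(0)
d = 8                                  # toy hidden size
h = rng.normal(size=(6, d))            # encoder states h_1..h_6
s_p = rng.normal(size=(3, d))          # keyphrase-decoder states
s_a = rng.normal(size=d)               # argument-decoder state at step t

def new_params():
    # separate parameter sets per attention, as in the paper
    return (rng.normal(size=(d, d)), rng.normal(size=(d, d)),
            np.zeros(d), rng.normal(size=d))

c_t = attn_context(h, s_a, *new_params())          # input context, Eq. (2)
c_t_prime = attn_context(s_p, s_a, *new_params())  # keyphrase context, Eq. (7)
decoder_input = np.concatenate([c_t, c_t_prime])   # [c_t; c'_t] for g' in Eq. (6)
print(decoder_input.shape)  # (16,)
```

The concatenated vector [c_t; c'_t] is exactly what the argument decoder's state update g' consumes in Eq. (6).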
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
could be that arguments generated based on system retrieval contain fewer topic-specific words and more generic argumentative phrases.", "Since the latter are often observed in human-written arguments, this may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3, we find that reranking with a smaller step size, e.g., p = 5, can generally lead to better METEOR scores.", "(Figure 3 caption: Beams are reranked at every 5, 10, and 20 steps (p).", "For each step size, we also show the effect of varying k, where the top k words are selected deterministically for beam expansion, with 10 - k words randomly sampled from the multinomial distribution after removing those k words.", "Reranking with a smaller step size yields better results.)", "Although varying the number of top words for beam expansion does not yield a significant difference, we do observe more diverse beams in the system output if more candidate words are selected stochastically (i.e.", "with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observed that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve this goal, we first train a topic-relevance estimation model inspired by the latent semantic model in Huang et al.", "(2013).", "A pair of OP and argument, each represented as the average of its word embeddings, are separately fed into a two-layer transformation model.", "A dot product is computed over the two projected low-dimensional vectors, and a sigmoid function then outputs the relevance score.", "For model learning, we further divide our current training data into training, development, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "Details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between the OP and the corresponding system arguments.", "Each system argument is treated as a positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences are most similar to that of the positive sample.", "Intuitively, if an argument contains more topic-relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from the negative samples.", "The ranking metrics MRR and Precision at 1 (P@1) are used, with results reported in Table 4.", "The ranker yields significantly better scores for arguments generated by models trained with evidence, compared to arguments generated by the SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in the system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% for our separate decoder model with attention over keyphrases.", "This further implies that our
model generates more topic-relevant content.", "Human Evaluation We also hire three trained human judges who are fluent English speakers to rate the system arguments on the following three aspects on a scale of 1 to 5 (with 5 as best): grammaticality (whether an argument is fluent), informativeness (whether the argument contains useful information and is not generic), and relevance (whether the argument contains information of a different stance or is off-topic).", "(Table 5: Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments. RETRIEVAL: Gram 4.5 ± 0.6, Info 3.7 ± 0.9, Rel 3.3 ± 1.1; SEQ2SEQ: Gram 3.3 ± 1.1, Info 1.2 ± 0.5, Rel 1.4 ± 0.7; OUR MODEL: Gram 2.5 ± 0.8, Info 1.6 ± 0.8, Rel 1.8 ± 0.8.)", "Our model with the separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "30 CMV threads are randomly selected, each of which is presented with the randomly shuffled OP statement and four system arguments.", "Table 5 shows that our model with the separate decoder and attention over keyphrases produces significantly more informative and relevant arguments than seq2seq trained without evidence.", "8 However, we also observe that human judges prefer the retrieved arguments over those of the generation-based models, illustrating the gap between system arguments and human-edited text.", "Sample arguments are displayed in Figure 4.", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis of the keyphrases generated by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold standard (i.e., are used by human arguments).", "Furthermore, 36% of the generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high-level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4, the keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4, our model generally captures more relevant concepts, e.g., \"military army\" and \"wars of the world\", as discussed in the first example. (Figure 4 content follows.) Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine. Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.",
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-11
Step 5 Argument Decoding
Generate argument based on encoder or keyphrase last hidden state Attention mechanism over both input and keyphrase results <phz> right to privacy<phz> I believe the <evd> edward snowden <arg> you are ignoring the
Generate argument based on encoder or keyphrase last hidden state Attention mechanism over both input and keyphrase results <phz> right to privacy<phz> I believe the <evd> edward snowden <arg> you are ignoring the
[]
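The topic-relevance estimator described in the Topic-Relevance Evaluation subsection above (average word embeddings per side, a two-layer transformation each, a dot product, then a sigmoid, scored with MRR and P@1) admits a schematic version like the one below; the layer sizes and the tanh nonlinearity are our assumptions, not details from the paper.

```python
import numpy as np

def avg_embedding(tokens, emb, dim=200):
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def project(x, layers):
    for W, b in layers:          # two-layer transformation per side
        x = np.tanh(W @ x + b)
    return x

def relevance(op_vec, arg_vec, op_layers, arg_layers):
    z = project(op_vec, op_layers) @ project(arg_vec, arg_layers)
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid over the dot product

def mrr_and_p_at_1(pos_score, neg_scores):
    """Rank the positive argument against its sampled negatives."""
    rank = 1 + sum(s > pos_score for s in neg_scores)
    return 1.0 / rank, float(rank == 1)

# toy usage with random parameters
rng = np.random.default_rng(2)
layers = lambda: [(rng.normal(scale=0.1, size=(64, 200)), np.zeros(64)),
                  (rng.normal(scale=0.1, size=(32, 64)), np.zeros(32))]
op, arg = rng.normal(size=200), rng.normal(size=200)
print(relevance(op, arg, layers(), layers()))
```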
GEM-SciDuet-train-131#paper-1354#slide-12
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High-quality arguments are essential elements of human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task of automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as an intermediate representation, followed by a separate decoder producing the final argument based on both the input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model, according to both automatic evaluation and human assessments.
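The Gold-Standard Keyphrase Construction rules quoted in the paper content (phrases of 2 to 10 tokens that overlap the argument's content words; on span overlap, the longer phrase wins only if it covers more content words, otherwise the shorter is retained) translate into a short filter. The span representation and tie-breaking below are our reading of those rules, not the authors' code; phrase extraction itself (Stanford CoreNLP noun and verb phrases) is assumed to happen upstream.

```python
def gold_keyphrases(candidate_spans, arg_content_words):
    """candidate_spans: (start, end, tokens) spans from one evidence sentence.
    Returns the kept phrases joined with the <phrase> delimiter."""
    scored = []
    for start, end, toks in candidate_spans:
        if 2 <= len(toks) <= 10:
            cov = len({t.lower() for t in toks} & arg_content_words)
            if cov > 0:
                scored.append((cov, end - start, start, end, toks))
    # more coverage first; among equal coverage, prefer the shorter span
    scored.sort(key=lambda s: (-s[0], s[1]))
    kept = []
    for cov, length, start, end, toks in scored:
        if all(end <= s or e <= start for _, _, s, e, _ in kept):
            kept.append((cov, length, start, end, toks))
    return "<phrase>".join(" ".join(t[4]) for t in kept)

# toy usage: the longer span wins because it covers more content words
spans = [(0, 3, ["right", "to", "privacy"]), (0, 2, ["right", "to"])]
print(gold_keyphrases(spans, {"privacy", "right"}))  # right to privacy
```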
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
adapt to new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from the Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoder-based sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014), which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017).", "Our encoder takes as input a statement on a disputed issue and a set of relevant evidence automatically retrieved from English Wikipedia 2.", "Our decoder consists of two separate parts: one first generates keyphrases as an intermediate representation of \"talking points\", and the other then generates an argument based on both the input and the keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement with a separately trained relevance estimation model.", "Results suggest that arguments generated by our model are more likely to be predicted as on-topic than those of other seq2seq-based generation models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2.", "Given a statement, a set of queries is constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts 5.", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "6 In total, 264,670 politics abstracts and 827,437 non-politics abstracts are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and to produce arguments based on the input and the keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015), our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending to both the input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens x = {x^O; x^E}, where x^O is the statement sequence and x^E contains relevant evidence extracted from Wikipedia by a separate retrieval module.", "A special token <evd> is inserted between x^O and x^E.", "Our model then first generates a set of keyphrases as a sequence y^p = {y^p_l}, followed by an argument y^a = {y^a_t}, by maximizing \log P(y | x), where y = {y^p; y^a}.", "The objective is further decomposed into \sum_t \log P(y_t | y_{1:t-1}, x), with each term estimated by a softmax function over a non-linear transformation of the decoder hidden states s^a_t and s^p_t, for the argument decoder and the keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: s_t = g(s_{t-1}, c_t, y_t) (1); c_t = \sum_{j=1}^{T} \alpha_{tj} h_j (2); \alpha_{tj} = \exp(e_{tj}) / \sum_{k=1}^{T} \exp(e_{tk}) (3); e_{tj} = v^\top \tanh(W_h h_j + W_s s_t + b_{attn}) (4). Notice that two sets of parameters and different state update functions g(·) are learned for the separate decoders: {W^a_h, W^a_s, b^a_{attn}, g^a(·)} for the argument decoder and {W^p_h, W^p_s, b^p_{attn}, g^p(·)} for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (biLSTM) is used to obtain the encoder hidden states h_i for each time step i.", "For the biLSTM, the hidden state is the concatenation of the forward and backward hidden states: h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}].", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) and updated during training.", "The last hidden state of the encoder is used to initialize both decoders.", "In our model, the encoder is shared by the argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders, a keyphrase decoder and an argument decoder, each implemented with a separate two-layer unidirectional LSTM, in a
similar spirit to one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015).", "The distinction is that our training objective is the sum of two loss functions: L(\theta) = -\frac{\alpha}{T_p} \sum_{(x, y^p) \in D} \log P(y^p | x; \theta) - \frac{1 - \alpha}{T_a} \sum_{(x, y^a) \in D} \log P(y^a | x; \theta) (5), where T_p and T_a denote the lengths of the reference keyphrase sequence and the reference argument sequence.", "\alpha is a weighting parameter, and it is set to 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend to both the encoder hidden states and the keyphrase decoder hidden states.", "An additional context vector c'_t is computed over the keyphrase decoder hidden states s^p_j and is used for computing the new argument decoder state: s^a_t = g'(s^a_{t-1}, [c_t; c'_t], y^a_t) (6); c'_t = \sum_{j=1}^{T_p} \alpha'_{tj} s^p_j (7); \alpha'_{tj} = \exp(e'_{tj}) / \sum_{k=1}^{T_p} \exp(e'_{tk}) (8); e'_{tj} = v'^\top \tanh(W'_p s^p_j + W'_a s^a_t + b'_{attn}) (9), where s^p_j is the hidden state of the keyphrase decoder at position j, s^a_t is the hidden state of the argument decoder at timestep t, and c_t is computed as in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy for the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016); e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we pick only the top n words (n < k) deterministically, and randomly draw another k - n words from the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on each beam's coverage of content words from the input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for the experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach to retrieving evidence sentences: given a statement, we (1) construct one query per sentence and retrieve relevant articles from Wikipedia, and (2) rerank paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from the target user arguments.", "For test, queries are constructed from the OP.", "Article Retrieval.", "We first create an inverted index lookup
table for Wikipedia, as done in Chen et al.", "(2017).", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval are conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as the background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, the query \"the government, my e-mails, national security\" is constructed for the first sentence of the OP in the motivating example (Figure 2).", "The top five retrieved articles with the highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top-ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, which are reranked by TF-IDF similarity again.", "We keep only up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP.", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content-word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as the gold-standard generation for training.", "6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence with our current encoder takes a huge amount of time.", "We therefore propose a sampling strategy that allows the encoder to finish encoding within a reasonable time by considering only a subset of the evidence: for each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; the sampled sentences are then concatenated.", "This procedure is repeated three times per statement, where a statement is a user argument for the training data and an OP for the test set.", "In our experiments, we remove duplicate samples and those without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 examples for validation (640 OPs), and 30,417 examples retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as the encoder and a two-layer unidirectional LSTM as the decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on the RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied, with a maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.",
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
could be that arguments generated based on system retrieval contain less topic-specific words and more generic argumentative phrases.", "Since the later is often observed in human written arguments, it may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3 , we find that reranking with a smaller step size, e.g., Beams are reranked at every 5, 10, and 20 steps (p).", "For each step size, we also show the effect of varying k, where top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled over multinomial distribution after removing the k words.", "Reranking with smaller step size yields better results.", "p = 5, can generally lead to better METEOR scores.", "Although varying the number of top words for beam expansion does not yield significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.", "with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve the goal, we first train a topicrelevance estimation model inspired by the latent semantic model in Huang et al.", "(2013) .", "A pair of OP and argument, each represented as the average of word embeddings, are separately fed into a twolayer transformation model.", "A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, developing, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between OP and the corresponding system arguments.", "Each system argument is treated as positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences most similar to that of the positive sample.", "Intuitively, if an argument contains more topic relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.", "Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4 .", "The ranker yields significantly better scores over arguments generated from models trained with evidence, compared to arguments generated by SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.", "This further implies that our 
model generates more topic-relevant content.", "Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 System Gram Info Rel RETRIEVAL 4.5 ± 0.6 3.7 ± 0.9 3.3 ± 1.1 SEQ2SEQ 3.3 ± 1.1 1.2 ± 0.5 1.4 ± 0.7 OUR MODEL 2.5 ± 0.8 1.6 ± 0.8 1.8 ± 0.8 Table 5 : Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments.", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "to 5 (with 5 as best): Grammaticality-whether an argument is fluent, informativeness-whether the argument contains useful information and is not generic, and relevance-whether the argument contains information of a different stance or offtopic.", "30 CMV threads are randomly selected, each of which is presented with randomly-shuffled OP statement and four system arguments.", "Table 5 shows that our model with separate decoder and attention over keyphrases produce significantly more informative and relevant arguments than seq2seq trained without evidence.", "8 However, we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human edited text.", "Sample arguments are displayed in Figure 4 .", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis over the generated keyphrases by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments).", "Furthermore, 36% of generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4 , keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4 , our model generally captures more relevant concepts, e.g., \"military army\" and \"wars Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.", 
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-12
Experiments
Pre-training: initialize first layers of encoders and argument decoders to warm up the system with a good argumentation language model, trained on all training data + non-politics threads + non-root replies. Baseline: sequence-to-sequence without evidence sentences or keyphrases. System vs. Oracle retrieval: in reality, during test time evidence can only be obtained from the input statement; in the Oracle setup, we retrieve evidence based on queries built from human arguments. Example: System Retrieval uses the input statement ("I believe the government should be allowed to view my emails"); Oracle Retrieval uses the human argument ("Giving up privacy means giving up some of your right to free speech").
Pre-training: initialize first layers of encoders and argument decoders to warm up the system with a good argumentation language model, trained on all training data + non-politics threads + non-root replies. Baseline: sequence-to-sequence without evidence sentences or keyphrases. System vs. Oracle retrieval: in reality, during test time evidence can only be obtained from the input statement; in the Oracle setup, we retrieve evidence based on queries built from human arguments. Example: System Retrieval uses the input statement ("I believe the government should be allowed to view my emails"); Oracle Retrieval uses the human argument ("Giving up privacy means giving up some of your right to free speech").
[]
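The decoder-strategy comparison in the record above varies a reranking step size p and the number of words expanded deterministically, following the hybrid beam expansion the paper proposes: the top n words are taken greedily and another k - n are sampled from the renormalized multinomial over the remaining vocabulary. A minimal NumPy sketch of that expansion step, with the paper's k=10 and n=3 as defaults, might look as follows; hybrid_expand is a hypothetical name, and probs stands for one hypothesis's softmax output over the vocabulary.

import numpy as np

def hybrid_expand(probs, k=10, n=3, rng=None):
    # Deterministically take the n highest-probability words ...
    rng = rng or np.random.default_rng()
    top = np.argsort(probs)[::-1][:n]
    # ... then sample k - n distinct extra words from the multinomial over the rest
    # (assumes at least k words have nonzero probability).
    rest = probs.copy()
    rest[top] = 0.0
    rest = rest / rest.sum()
    sampled = rng.choice(len(probs), size=k - n, replace=False, p=rest)
    return np.concatenate([top, sampled])  # k candidate word ids for this hypothesis

A smaller deterministic share makes the expansion more stochastic, which the paper links to more diverse beams in the system output.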
GEM-SciDuet-train-131#paper-1354#slide-13
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .", "Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder 
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts 5 .", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "6 In total, 264,670 politics abstracts and 827,437 of non-politics are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015) , our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens x = {x O ; x E }, where x O is the statement se- quence and x E contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between x O and x E .", "Our model then first generates a set of keyphrases as a sequence y p = {y p l }, followed by an argument y a = {y a t }, by maximizing log P (y|x), where y = {y p ; y a }.", "The objective is further decomposed into t log P (y t |y 1:t−1 , x), with each term estimated by a softmax function over a non-linear transformation of decoder hidden states s a t and s p t , for argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: s t = g(s t−1 , c t , y t ) (1) c t = T j=1 α tj h j (2) α tj = exp(e tj ) T k=1 exp(e tk ) (3) e tj = v T tanh(W h h j + W s s t + b attn ) (4) Notice that two sets of parameters and different state update functions g(·) are learned for separate decoders: {W a h , W a s , b a attn , g a (·)} for the argument decoder; {W p h , W p s , b p attn , g p (·)} for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (bi-LSTM) is used to obtain the encoder hidden states h i for each time step i.", "For biLSTM, the hidden state is the concatenation of forward and backward hidden states: h i = [ − → h i ; ← − h i ].", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , and updated during training.", "The last hidden state of encoder is used to initialize both decoders.", "In our model the encoder is shared by argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: keyphrase decoder and argument decoder, each is implemented with a separate two-layer unidirectional LSTM, in a 
similar spirit with one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015) .", "The distinction is that our training objective is the sum of two loss functions: L(θ) = − α T p (x,y p )∈D log P (y p |x; θ) − (1 − α) T a (x,y a )∈D log P (y a |x; θ) (5) where T p and T a denote the lengths of reference keyphrase sequence and argument sequence.", "α is a weighting parameter, and it is set as 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "Additional context vector c t is then computed over keyphrase decoder hidden states s p j , which is used for computing the new argument decoder state: s a t = g (s a t−1 , [c t ; c t ], y a t ) (6) c t = Tp j=1 α tj s p j (7) α tj = exp(e tj ) Tp k=1 exp(e tk ) (8) e tj = v T tanh(W p s p j + W a s a t + b attn ) (9) where s p j is the hidden state of keyphrase decoder at position j, s a t is the hidden state of argument decoder at timestep t, and c t is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy on the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016) , e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on beam's coverage of content words from input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from OP.", "Article Retrieval.", "We first create an inverted index lookup 
table for Wikipedia as done in Chen et al.", "(2017) .", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval will be conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of OP in the motivating example ( Figure 2 ).", "Top five retrieved articles with highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP .", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as gold-standard generation for training.", "6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence by our current decoder takes a huge amount of time.", "We there propose a sampling strategy to allow the encoder to finish encoding within reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.", "This procedure is repeated three times per statement, where a statement is an user argument for training data and an OP for test set.", "In our experiments, we remove duplicates samples and the ones without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied with the maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.", 
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
could be that arguments generated based on system retrieval contain less topic-specific words and more generic argumentative phrases.", "Since the later is often observed in human written arguments, it may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3 , we find that reranking with a smaller step size, e.g., Beams are reranked at every 5, 10, and 20 steps (p).", "For each step size, we also show the effect of varying k, where top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled over multinomial distribution after removing the k words.", "Reranking with smaller step size yields better results.", "p = 5, can generally lead to better METEOR scores.", "Although varying the number of top words for beam expansion does not yield significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.", "with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve the goal, we first train a topicrelevance estimation model inspired by the latent semantic model in Huang et al.", "(2013) .", "A pair of OP and argument, each represented as the average of word embeddings, are separately fed into a twolayer transformation model.", "A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, developing, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between OP and the corresponding system arguments.", "Each system argument is treated as positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences most similar to that of the positive sample.", "Intuitively, if an argument contains more topic relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.", "Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4 .", "The ranker yields significantly better scores over arguments generated from models trained with evidence, compared to arguments generated by SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.", "This further implies that our 
model generates more topic-relevant content.", "Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 System Gram Info Rel RETRIEVAL 4.5 ± 0.6 3.7 ± 0.9 3.3 ± 1.1 SEQ2SEQ 3.3 ± 1.1 1.2 ± 0.5 1.4 ± 0.7 OUR MODEL 2.5 ± 0.8 1.6 ± 0.8 1.8 ± 0.8 Table 5 : Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments.", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "to 5 (with 5 as best): Grammaticality-whether an argument is fluent, informativeness-whether the argument contains useful information and is not generic, and relevance-whether the argument contains information of a different stance or offtopic.", "30 CMV threads are randomly selected, each of which is presented with randomly-shuffled OP statement and four system arguments.", "Table 5 shows that our model with separate decoder and attention over keyphrases produce significantly more informative and relevant arguments than seq2seq trained without evidence.", "8 However, we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human edited text.", "Sample arguments are displayed in Figure 4 .", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis over the generated keyphrases by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments).", "Furthermore, 36% of generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4 , keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4 , our model generally captures more relevant concepts, e.g., \"military army\" and \"wars Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.", 
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-13
Experiments: Models
RETRIEVAL-BASED: concatenate evidence sentences. SEQ2SEQ: encode the input statement only. SEQ2SEQ + encode evidence: encode statement and evidence sentences. SEQ2SEQ + encode keyphrase: encode statement and keyphrases (a stronger baseline, because keyphrases are actually reused by human arguments). DEC-SHARED: argument decoder initialized by the keyphrase decoder. DEC-SEPARATE: argument decoder initialized by the encoder. DEC-SEPARATE + attend keyphrase: with attention on the keyphrase decoder. (Slide diagrams show the token streams, e.g. "I believe the ... <evd> edward snowden ...", "<phz> right to privacy <phz> ...", "<arg> you are ignoring the ...", with attention links between decoder and encoder states.)
RETRIEVAL-BASED: concatenate evidence sentences. SEQ2SEQ: encode the input statement only. SEQ2SEQ + encode evidence: encode statement and evidence sentences. SEQ2SEQ + encode keyphrase: encode statement and keyphrases (a stronger baseline, because keyphrases are actually reused by human arguments). DEC-SHARED: argument decoder initialized by the keyphrase decoder. DEC-SEPARATE: argument decoder initialized by the encoder. DEC-SEPARATE + attend keyphrase: with attention on the keyphrase decoder. (Slide diagrams show the token streams, e.g. "I believe the ... <evd> edward snowden ...", "<phz> right to privacy <phz> ...", "<arg> you are ignoring the ...", with attention links between decoder and encoder states.)
[]
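The "attend keyphrase" variants listed above add a second attention context computed over the keyphrase decoder states, per Eqs. (6)-(9) of the model formulation. The following NumPy sketch shows only that dual-context computation under assumed shapes; the state update g' and the output projection are omitted, and all parameter names are placeholders keyed to the paper's W_h, W_s, W_p, W_a.

import numpy as np

def additive_attention(query, keys, W_keys, W_query, v, b):
    # e_j = v^T tanh(W_keys k_j + W_query q + b); alpha = softmax(e); c = sum_j alpha_j k_j
    scores = np.tanh(keys @ W_keys.T + query @ W_query.T + b) @ v
    alphas = np.exp(scores - scores.max())
    alphas = alphas / alphas.sum()
    return alphas @ keys

def dual_context(dec_state, enc_states, kp_states, p):
    # Context over the encoder (statement + evidence) states, Eqs. (2)-(4).
    c_input = additive_attention(dec_state, enc_states, p["Wh"], p["Ws"], p["v"], p["b"])
    # Second context over the keyphrase decoder states, Eqs. (7)-(9).
    c_keyphrase = additive_attention(dec_state, kp_states, p["Wp"], p["Wa"], p["v2"], p["b2"])
    # The argument decoder consumes the concatenation [c_t ; c'_t], Eq. (6).
    return np.concatenate([c_input, c_keyphrase])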
GEM-SciDuet-train-131#paper-1354#slide-14
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .", "Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder 
"Data Collection and Processing We draw data from the Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed by detailed reasons, and other users then reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017.", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from the statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder, which constructs the argument by attending to both input and keyphrases.", "Only root replies (i.e., replies directly addressing the OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language, (3) awarded a delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high-quality root replies.", "We treat each OP as the input statement and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts.", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "In total, 264,670 politics abstracts and 827,437 non-politics abstracts are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length of OPs is 16.1 sentences (356.4 words), and that of arguments is 7.7 sentences (161.1 words).",
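
The bootstrapped domain classifier above can be sketched as follows. This is a minimal illustration assuming scikit-learn; the confidence threshold, the number of bootstrapping rounds, and all function names are our own choices, since the paper specifies only unigram features, logistic regression, and bootstrapping.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def bootstrap_domain_classifier(seed_texts, seed_labels, unlabeled_ops,
                                rounds=3, threshold=0.9):
    # Unigram bag-of-words features, as described in the paper.
    # seed_labels are ints with 1 = politics, 0 = non-politics.
    vectorizer = CountVectorizer(lowercase=True)
    texts, labels = list(seed_texts), list(seed_labels)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(vectorizer.fit_transform(texts), labels)
        if not unlabeled_ops:
            break
        # Move confidently classified OPs into the labeled pool (bootstrapping).
        probs = clf.predict_proba(vectorizer.transform(unlabeled_ops))[:, 1]
        remaining = []
        for op, p in zip(unlabeled_ops, probs):
            if p >= threshold or p <= 1.0 - threshold:
                texts.append(op)
                labels.append(int(p >= threshold))
            else:
                remaining.append(op)
        unlabeled_ops = remaining
    return vectorizer, clf
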
"Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and to produce arguments based on the input and the keyphrases.", "Extending the successful seq2seq attentional model (Bahdanau et al., 2015), our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases and the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending to both the input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens $x = \{x^O; x^E\}$, where $x^O$ is the statement sequence and $x^E$ contains relevant evidence extracted from Wikipedia by a separate retrieval module.", "A special token <evd> is inserted between $x^O$ and $x^E$.", "Our model then first generates a set of keyphrases as a sequence $y^p = \{y^p_l\}$, followed by an argument $y^a = \{y^a_t\}$, by maximizing $\log P(y|x)$, where $y = \{y^p; y^a\}$.", "The objective is further decomposed into $\sum_t \log P(y_t | y_{1:t-1}, x)$, with each term estimated by a softmax function over a non-linear transformation of the decoder hidden states $s^a_t$ and $s^p_t$, for the argument decoder and the keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al. (2015) with attention: $s_t = g(s_{t-1}, c_t, y_t)$ (1), $c_t = \sum_{j=1}^{T} \alpha_{tj} h_j$ (2), $\alpha_{tj} = \exp(e_{tj}) / \sum_{k=1}^{T} \exp(e_{tk})$ (3), and $e_{tj} = v^\top \tanh(W_h h_j + W_s s_t + b_{attn})$ (4).", "Notice that two sets of parameters and different state update functions $g(\cdot)$ are learned for the separate decoders: $\{W^a_h, W^a_s, b^a_{attn}, g^a(\cdot)\}$ for the argument decoder and $\{W^p_h, W^p_s, b^p_{attn}, g^p(\cdot)\}$ for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (biLSTM) is used to obtain the encoder hidden states $h_i$ for each time step $i$.", "For the biLSTM, the hidden state is the concatenation of the forward and backward hidden states: $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$.", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) and updated during training.", "The last hidden state of the encoder is used to initialize both decoders.", "In our model the encoder is shared by the argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders, a keyphrase decoder and an argument decoder, each implemented with a separate two-layer unidirectional LSTM, in a similar spirit to one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015).",
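
A minimal PyTorch sketch of the additive attention in Eqs. (2)-(4) follows. The class and variable names and the dimension handling are our own; the paper does not specify implementation details beyond these equations.

import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    # Computes e_tj = v^T tanh(W_h h_j + W_s s_t + b_attn), the softmax of
    # Eq. (3), and the context vector of Eq. (2).
    def __init__(self, hidden_dim):
        super().__init__()
        self.W_h = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_s = nn.Linear(hidden_dim, hidden_dim, bias=True)  # bias plays the role of b_attn
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, encoder_states, decoder_state):
        # encoder_states: (T, hidden_dim); decoder_state: (hidden_dim,)
        scores = self.v(torch.tanh(self.W_h(encoder_states) + self.W_s(decoder_state)))
        alpha = torch.softmax(scores.squeeze(-1), dim=-1)             # Eq. (3)
        context = (alpha.unsqueeze(-1) * encoder_states).sum(dim=0)   # Eq. (2)
        return context, alpha
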
"The distinction is that our training objective is the sum of two loss functions: $\mathcal{L}(\theta) = -\frac{\alpha}{T^p} \sum_{(x, y^p) \in D} \log P(y^p|x; \theta) - \frac{1-\alpha}{T^a} \sum_{(x, y^a) \in D} \log P(y^a|x; \theta)$ (5), where $T^p$ and $T^a$ denote the lengths of the reference keyphrase sequence and the argument sequence.", "$\alpha$ is a weighting parameter, and it is set to 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend to both the encoder hidden states and the keyphrase decoder hidden states.", "An additional context vector $c'_t$ is computed over the keyphrase decoder hidden states $s^p_j$, which is used for computing the new argument decoder state: $s^a_t = g'(s^a_{t-1}, [c_t; c'_t], y^a_t)$ (6), $c'_t = \sum_{j=1}^{T_p} \alpha'_{tj} s^p_j$ (7), $\alpha'_{tj} = \exp(e'_{tj}) / \sum_{k=1}^{T_p} \exp(e'_{tk})$ (8), and $e'_{tj} = v'^\top \tanh(W'_p s^p_j + W'_a s^a_t + b'_{attn})$ (9), where $s^p_j$ is the hidden state of the keyphrase decoder at position $j$, $s^a_t$ is the hidden state of the argument decoder at timestep $t$, and $c_t$ is computed as in Eq. (2).", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy for the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016); e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k) deterministically, and randomly draw another k − n words from the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on each beam's coverage of content words from the input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for the experiments.", "The effect of parameter selection is studied in Section 7.",
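
The hybrid beam expansion can be sketched in a few lines of Python. This is a minimal illustration under the paper's settings (k = 10, n = 3); the function name and the use of NumPy are our own assumptions, and the segment-based reranking step is not shown.

import numpy as np

def hybrid_expand(log_probs, k=10, n=3, rng=None):
    # Pick the top-n words deterministically, then sample k - n more from the
    # renormalized multinomial over the remaining vocabulary.
    rng = rng or np.random.default_rng()
    probs = np.exp(log_probs - log_probs.max())
    probs /= probs.sum()
    top_n = np.argsort(probs)[::-1][:n]
    rest = probs.copy()
    rest[top_n] = 0.0            # remove the already expanded words
    rest /= rest.sum()
    sampled = rng.choice(len(probs), size=k - n, replace=False, p=rest)
    return np.concatenate([top_n, sampled])
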
"Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach to retrieving evidence sentences: given a statement, (1) we construct one query per sentence and retrieve relevant articles from Wikipedia, and (2) we rerank paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from the target user arguments.", "For test, queries are constructed from the OP.", "Article Retrieval.", "We first create an inverted index lookup table for Wikipedia as done in Chen et al. (2017).", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval will be conducted if more than one query is created.", "Specifically, we first collect the topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by the log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as the background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, the query \"the government, my e-mails, national security\" is constructed for the first sentence of the OP in the motivating example (Figure 2).", "The top five retrieved articles with the highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top-ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.",
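
A minimal sketch of the TF-IDF reranking step, assuming scikit-learn; the paper does not name a specific implementation, and the function name and stop-word choice here are our own.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank_by_tfidf(statement, candidates, top_k=10):
    # Rank candidate paragraphs or sentences by TF-IDF similarity to the
    # statement, keeping up to top_k candidates with positive scores.
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform([statement] + candidates)
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    ranked = sorted(zip(candidates, scores), key=lambda pair: -pair[1])
    return [cand for cand, score in ranked[:top_k] if score > 0]

The same function can be applied twice, first over paragraphs (top_k=100) and then over their sentences (top_k=10), mirroring the two reranking passes described above.
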
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
"The reason could be that arguments generated based on system retrieval contain fewer topic-specific words and more generic argumentative phrases.", "Since the latter are often observed in human-written arguments, this may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "Figure 3: Beams are reranked every 5, 10, and 20 steps (p); for each step size, we also show the effect of varying k, where the top k words are selected deterministically for beam expansion and the remaining 10 − k words are sampled from the multinomial distribution after removing those k words; reranking with a smaller step size yields better results.", "From the results in Figure 3, we find that reranking with a smaller step size, e.g., p = 5, can generally lead to better METEOR scores.", "Although varying the number of top words for beam expansion does not yield a significant difference, we do observe more diverse beams in the system output if more candidate words are selected stochastically (i.e., with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve this goal, we first train a topic-relevance estimation model inspired by the latent semantic model in Huang et al. (2013).", "An OP and an argument, each represented as the average of its word embeddings, are separately fed into a two-layer transformation model.", "A dot product is computed over the two projected low-dimensional vectors, and a sigmoid function then outputs the relevance score.", "For model learning, we further divide our current training data into training, development, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the 5 most dissimilar ones, measured by Jaccard distance, as negative training samples.", "Details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between an OP and the corresponding system arguments.", "Each system argument is treated as a positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences are most similar to those of the positive sample.", "Intuitively, if an argument contains more topic-relevant information, the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score and thus cannot be easily distinguished from the negative samples.", "The ranking metrics MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4.", "The ranker yields significantly better scores for arguments generated by models trained with evidence, compared to arguments generated by the SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in the system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% for our separate decoder model with attention over keyphrases.", "This further implies that our model generates more topic-relevant content.",
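
A minimal PyTorch sketch of the relevance scorer described above follows. Layer sizes, the choice of nonlinearity, and the use of a shared projection for OP and argument are our assumptions; the paper reports only the two-layer transformation, the dot product, and the sigmoid.

import torch
import torch.nn as nn

class RelevanceScorer(nn.Module):
    def __init__(self, emb_dim=200, proj_dim=64):
        super().__init__()
        # Two-layer transformation applied to averaged word embeddings.
        self.proj = nn.Sequential(
            nn.Linear(emb_dim, proj_dim),
            nn.Tanh(),
            nn.Linear(proj_dim, proj_dim),
        )

    def forward(self, op_avg_emb, arg_avg_emb):
        # Dot product of the two projected vectors, squashed by a sigmoid.
        op_vec = self.proj(op_avg_emb)
        arg_vec = self.proj(arg_avg_emb)
        return torch.sigmoid((op_vec * arg_vec).sum(dim=-1))
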
"Human Evaluation We also hire three trained human judges who are fluent English speakers to rate the system arguments on the following three aspects on a scale of 1 to 5 (with 5 as best): grammaticality, whether an argument is fluent; informativeness, whether the argument contains useful information and is not generic; and relevance, whether the argument contains information of a different stance and is not off-topic.", "Table 5: Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments: RETRIEVAL: Gram 4.5 ± 0.6, Info 3.7 ± 0.9, Rel 3.3 ± 1.1; SEQ2SEQ: Gram 3.3 ± 1.1, Info 1.2 ± 0.5, Rel 1.4 ± 0.7; OUR MODEL: Gram 2.5 ± 0.8, Info 1.6 ± 0.8, Rel 1.8 ± 0.8.", "Our model with a separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "30 CMV threads are randomly selected, each presented with the OP statement and four randomly shuffled system arguments.", "Table 5 shows that our model with a separate decoder and attention over keyphrases produces significantly more informative and relevant arguments than seq2seq trained without evidence.", "However, we also observe that human judges prefer the retrieved arguments over those of the generation-based models, illustrating the gap between system arguments and human-edited text.", "Sample arguments are displayed in Figure 4.", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis of the keyphrases generated by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold standard (i.e., are used by human arguments).", "Furthermore, 36% of the generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high-level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4, the keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "Figure 4: Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.", "However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "As can be seen from the sample outputs in Figure 4, our model generally captures more relevant concepts, e.g., \"military army\" and \"wars of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative-style language, though there is still a noticeable gap between system arguments and human-constructed arguments.", "As discovered by our prior work, both topical content and language style are essential elements of high-quality arguments.", "For future work, generation models with better control over linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017).", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000).", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "However, it only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.",
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-14
Automatic Evaluation Generation Quality
BLEU: n-gram precision based measure METEOR: unigram precision and recall based on alignment Gold-standard: user generated arguments Multi-reference setup: best aligned one -> multiple plausible arguments exist * BLEU/METEOR: The higher the better Our models have better precision. The generated content are more likely to be found in human arguments. Retrieval baseline generation has better METEOR, which considers both precision and recall. w/System Retrieval w/ Oracle Retrieval BLEU-2 METEOR Length BLEU-2 METEOR Length
BLEU: n-gram precision based measure METEOR: unigram precision and recall based on alignment Gold-standard: user generated arguments Multi-reference setup: best aligned one -> multiple plausible arguments exist * BLEU/METEOR: The higher the better Our models have better precision. The generated content are more likely to be found in human arguments. Retrieval baseline generation has better METEOR, which considers both precision and recall. w/System Retrieval w/ Oracle Retrieval BLEU-2 METEOR Length BLEU-2 METEOR Length
[]
GEM-SciDuet-train-131#paper-1354#slide-15
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both humans and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-15
Automatic Evaluation: Topic Relevance
Motivation: Generic arguments can still have high BLEU scores. E.g. I don't agree with you., You are missing evidence., This is wrong. Semantic similarity model [Huang et al., 2013] Represents the semantic relatedness of two pieces of text Model tuned on training set Evaluated by mean reciprocal rank (MRR) and Precision at 1 (P@1) * The higher the better
Motivation: Generic arguments can still have high BLEU scores. E.g. I don't agree with you., You are missing evidence., This is wrong. Semantic similarity model [Huang et al., 2013] Represents the semantic relatedness of two pieces of text Model tuned on training set Evaluated by mean reciprocal rank (MRR) and Precision at 1 (P@1) * The higher the better
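The two ranking metrics named above, MRR and P@1, are computed over the one-positive-versus-five-negatives setup described in the paper text. A self-contained sketch, with toy scores standing in for real model outputs:

```python
def mrr_and_p_at_1(scored_queries):
    """scored_queries: list of (positive_score, [negative_scores]) pairs,
    one per OP; the positive is ranked among its negatives by score."""
    rr, hits = [], 0
    for pos, negs in scored_queries:
        rank = 1 + sum(1 for s in negs if s > pos)  # ties counted against us (assumption)
        rr.append(1.0 / rank)
        hits += rank == 1
    return sum(rr) / len(rr), hits / len(scored_queries)

# toy usage: two OPs, one positive vs. five negatives each -> (0.667, 0.5)
print(mrr_and_p_at_1([(0.9, [0.2, 0.4, 0.1, 0.3, 0.5]),
                      (0.4, [0.6, 0.2, 0.1, 0.3, 0.5])]))
```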
[]
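The negative-sample construction that trains this ranker (sample 100 arguments from other threads, keep the 5 most dissimilar by Jaccard distance) is likewise small enough to sketch. Tokenization and the candidate pool are assumed inputs, not part of the released pipeline:

```python
import random

def jaccard_distance(tokens_a, tokens_b):
    a, b = set(tokens_a), set(tokens_b)
    union = a | b
    return 1.0 - (len(a & b) / len(union) if union else 0.0)

def sample_negatives(arg_tokens, other_thread_args, n_sample=100, n_keep=5, seed=0):
    """Draw n_sample candidate arguments at random, then keep the n_keep
    most dissimilar by Jaccard distance, following the paper's recipe."""
    rng = random.Random(seed)
    pool = rng.sample(other_thread_args, min(n_sample, len(other_thread_args)))
    pool.sort(key=lambda cand: jaccard_distance(arg_tokens, cand), reverse=True)
    return pool[:n_keep]
```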
GEM-SciDuet-train-131#paper-1354#slide-16
1354
Neural Argument Generation Augmented with Externally Retrieved Evidence
High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both human and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topic-relevant content than a popular sequence-to-sequence generation model according to both automatic evaluation and human assessments.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279 ], "paper_content_text": [ "Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .", "A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.", "For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.", "In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .", "Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.", "We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.", "As a consequence, there exists a pressing need for automating the argument construction process.", "To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .", "Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.", "In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.", "One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.", "Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.", "This makes them unwieldy to 
be adapted for new domains.", "In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.", "To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.", "Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.", "Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".", "Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".", "Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .", "Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .", "Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.", "Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.", "We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.", "Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models.", "The rest of this paper is organized as follows.", "Section 2 highlights the roadmap of our system.", "The dataset used for our study is introduced in Section 3.", "The model formulation and retrieval methods are detailed in Sections 4 and 5.", "We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.", "Related work is discussed in Section 9.", "Finally, we conclude in Section 10.", "Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .", "Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.", "A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.", "The generation model then encodes the statement and the evidence with a shared encoder in sequence.", "Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder 
which produces the final argument.", "Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.", "Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.", "Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.", "In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .", "Figure 2: Overview of our system pipeline (best viewed in color).", "Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).", "A reranking module then outputs top sentences as evidence.", "The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.", "During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.", "Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.", "After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.", "We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.", "A Focused Domain Dataset.", "The current dataset contains diverse domains with unbalanced numbers of arguments.", "We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.", "However, topic labels are not available for the discussions.", "We thus construct a domain classifier for politics vs. 
non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts 5 .", "Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.", "Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.", "6 In total, 264,670 politics abstracts and 827,437 of non-politics are labeled.", "Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.", "7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.", "The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.", "Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.", "Extended from the successful seq2seq attentional model (Bahdanau et al., 2015) , our proposed model is novel in the following ways.", "First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.", "By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.", "Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.", "Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.", "Model Formulation Our model takes as input a sequence of tokens x = {x O ; x E }, where x O is the statement se- quence and x E contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.", "A special token <evd> is inserted between x O and x E .", "Our model then first generates a set of keyphrases as a sequence y p = {y p l }, followed by an argument y a = {y a t }, by maximizing log P (y|x), where y = {y p ; y a }.", "The objective is further decomposed into t log P (y t |y 1:t−1 , x), with each term estimated by a softmax function over a non-linear transformation of decoder hidden states s a t and s p t , for argument decoder and keyphrase decoder, respectively.", "The hidden states are computed as done in Bahdanau et al.", "(2015) with attention: s t = g(s t−1 , c t , y t ) (1) c t = T j=1 α tj h j (2) α tj = exp(e tj ) T k=1 exp(e tk ) (3) e tj = v T tanh(W h h j + W s s t + b attn ) (4) Notice that two sets of parameters and different state update functions g(·) are learned for separate decoders: {W a h , W a s , b a attn , g a (·)} for the argument decoder; {W p h , W p s , b p attn , g p (·)} for the keyphrase decoder.", "Encoder.", "A two-layer bidirectional LSTM (bi-LSTM) is used to obtain the encoder hidden states h i for each time step i.", "For biLSTM, the hidden state is the concatenation of forward and backward hidden states: h i = [ − → h i ; ← − h i ].", "Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , and updated during training.", "The last hidden state of encoder is used to initialize both decoders.", "In our model the encoder is shared by argument and keyphrase decoders.", "Decoders.", "Our model is equipped with two decoders: keyphrase decoder and argument decoder, each is implemented with a separate two-layer unidirectional LSTM, in a 
similar spirit with one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015) .", "The distinction is that our training objective is the sum of two loss functions: L(θ) = − α T p (x,y p )∈D log P (y p |x; θ) − (1 − α) T a (x,y a )∈D log P (y a |x; θ) (5) where T p and T a denote the lengths of reference keyphrase sequence and argument sequence.", "α is a weighting parameter, and it is set as 0.5 in our experiments.", "Attention over Both Input and Keyphrases.", "Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.", "We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.", "Additional context vector c t is then computed over keyphrase decoder hidden states s p j , which is used for computing the new argument decoder state: s a t = g (s a t−1 , [c t ; c t ], y a t ) (6) c t = Tp j=1 α tj s p j (7) α tj = exp(e tj ) Tp k=1 exp(e tk ) (8) e tj = v T tanh(W p s p j + W a s a t + b attn ) (9) where s p j is the hidden state of keyphrase decoder at position j, s a t is the hidden state of argument decoder at timestep t, and c t is computed in Eq.", "2.", "Decoder Sharing.", "We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.", "A special token <arg> is inserted between the two sequences, indicating the start of argument generation.", "Hybrid Beam Search Decoding Here we describe our decoding strategy on the argument decoder.", "We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.", "Hybrid Beam Expansion.", "In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.", "However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016) , e.g., one beam often dominates and thus inhibits hypothesis diversity.", "Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.", "This leads to a more diverse set of hypotheses.", "Segment-based Reranking.", "We also propose to rerank the beams every p steps based on beam's coverage of content words from input.", "Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.", "k = 10, n = 3, and p = 10 are used for experiments.", "The effect of parameter selection is studied in Section 7.", "Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.", "Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.", "A dump of December 21, 2016 was downloaded.", "For training, evidence sentences are retrieved with queries constructed from target user arguments.", "For test, queries are constructed from OP.", "Article Retrieval.", "We first create an inverted index lookup 
table for Wikipedia as done in Chen et al.", "(2017) .", "For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.", "Therefore, multiple passes of retrieval will be conducted if more than one query is created.", "Specifically, we first collect topic signature words of the post.", "Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.", "We treat posts from other discussions in our dataset as background.", "For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.", "For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of OP in the motivating example ( Figure 2 ).", "Top five retrieved articles with highest TF-IDF similarity scores are kept per query.", "Sentence Reranking.", "The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.", "Up to 100 top ranked paragraphs with positive scores are retained.", "These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.", "We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.", "Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP .", "• Keep phrases of length between 2 and 10 that overlap with content words in the argument.", "• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.", "The resultant phrases are then concatenated with a special delimiter <phrase> and used as gold-standard generation for training.", "6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence by our current decoder takes a huge amount of time.", "We there propose a sampling strategy to allow the encoder to finish encoding within reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.", "This procedure is repeated three times per statement, where a statement is an user argument for training data and an OP for test set.", "In our experiments, we remove duplicates samples and the ones without any retrieved evidence sentence.", "Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).", "Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.", "We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.", "We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.", "Gradient clipping is also applied with the maximum norm of 2.", "The input and output vocabulary sizes are both 50k.", "Curriculum Training.", "We train the models in three stages where the truncated input and output lengths are gradually increased.", 
"Details are listed in Table 2 .", "Importantly, this strategy allows model training to make rapid progress during early stages.", "Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.", "The model converges after about 10 epochs in total with pre-training initialization, which is described below.", "Adding Pre-training.", "We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.", "After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).", "Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.", "We describe more detailed results in the supplementary material.", "Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.", "We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.", "All seq2seq models use a regular beam search decoder with the same beam size as ours.", "Variants of Our Models.", "We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).", "For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).", "System vs. Oracle Retrieval.", "For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).", "We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.", "Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.", "Human arguments are used as the gold-standard.", "Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.", "We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.", "For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.", "As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.", "Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.", "Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.", "Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.", "Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.", "The reason 
could be that arguments generated based on system retrieval contain less topic-specific words and more generic argumentative phrases.", "Since the later is often observed in human written arguments, it may lead to higher precision and thus better BLEU scores.", "Decoder Strategy Comparison.", "We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).", "From the results in Figure 3 , we find that reranking with a smaller step size, e.g., Beams are reranked at every 5, 10, and 20 steps (p).", "For each step size, we also show the effect of varying k, where top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled over multinomial distribution after removing the k words.", "Reranking with smaller step size yields better results.", "p = 5, can generally lead to better METEOR scores.", "Although varying the number of top words for beam expansion does not yield significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.", "with a smaller k).", "Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.", "We believe that good arguments should include content that addresses the given topic.", "Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.", "To achieve the goal, we first train a topicrelevance estimation model inspired by the latent semantic model in Huang et al.", "(2013) .", "A pair of OP and argument, each represented as the average of word embeddings, are separately fed into a twolayer transformation model.", "A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.", "For model learning, we further divide our current training data into training, developing, and test sets.", "For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.", "details are included in the supplementary material.", "We then take this trained model to evaluate the relevance between OP and the corresponding system arguments.", "Each system argument is treated as positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences most similar to that of the positive sample.", "Intuitively, if an argument contains more topic relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.", "Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4 .", "The ranker yields significantly better scores over arguments generated from models trained with evidence, compared to arguments generated by SEQ2SEQ model.", "Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.", "For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.", "This further implies that our 
model generates more topic-relevant content.", "Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 System Gram Info Rel RETRIEVAL 4.5 ± 0.6 3.7 ± 0.9 3.3 ± 1.1 SEQ2SEQ 3.3 ± 1.1 1.2 ± 0.5 1.4 ± 0.7 OUR MODEL 2.5 ± 0.8 1.6 ± 0.8 1.8 ± 0.8 Table 5 : Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments.", "Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).", "to 5 (with 5 as best): Grammaticality-whether an argument is fluent, informativeness-whether the argument contains useful information and is not generic, and relevance-whether the argument contains information of a different stance or offtopic.", "30 CMV threads are randomly selected, each of which is presented with randomly-shuffled OP statement and four system arguments.", "Table 5 shows that our model with separate decoder and attention over keyphrases produce significantly more informative and relevant arguments than seq2seq trained without evidence.", "8 However, we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human edited text.", "Sample arguments are displayed in Figure 4 .", "Further Discussion Keyphrase Generation Analysis.", "Here we provide further analysis over the generated keyphrases by our separate decoder model.", "First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments).", "Furthermore, 36% of generated keyphrases are reused by our system arguments.", "With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high level talking points in the argument.", "For instance, in the first sample argument by our model in Figure 4 , keyphrases \"the motive\" and \"russian\" are generated.", "Although not used, they suggest the topics that the argument should stay on.", "Sample Arguments and Future Directions.", "As can be seen from the sample outputs in Figure 4 , our model generally captures more relevant concepts, e.g., \"military army\" and \"wars Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.", "That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.", "Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.", "It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.", "In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.", "Our Model: There are two reasons you are so far beyond.", "There is no reason to see the military army.", "You can infer what they assume it, so they tend to protect up.", "There.", "If we assume it were bad, they can not be controlled.", "So we're talking going to ignore the wars of the world.", "The fact of the matter is not bad.", "I would also assume it.", 
"However, the government can not be reminded of the world.", "Seq2seq + evd: 1.", "The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.", "In Russia, there is no way to continue to control the military.", "Seq2seq + KP: I don't think it's a bit.", "I don't think you're forgetting.", "I don't want to see the Syrian American dream in the US.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American dream of American citizens.", "I don't see the Syrian American military in the world.", "I don't think it wouldn't be true.", "Input: Voting for either Hillary or Trump is morally reprehensible.", "Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.", "Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?", "Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.", "Our Model: He's going to happen.", "You're overestimating the more premise.", "He's already overestimating to this hypothetical party.", "He's already killed over the supreme court positions.", "Looking for Bernie.", "This is the same reasons: Clinton is not the supreme court presidency.", "You've already sent Trump's right to vote.", "The fact is that he's going to happen.", "He's already going to notice.", "However, it's a long way.", "Seq2seq + evd: 1.", "If they don't vote the vote, they would not be able to vote for any candidate.", "They don't have the same effect on their political power.", "They are not voting for them.", "Seq2seq + KP: I agree with your view.", "I don't agree with you.", "I don't think it's easy to appeal to the Clintons.", "If you don't want to do this?", "Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.", "Only the main thesis is shown for the input OP.", "System generations are manually detokenized and capitalized.", "of the world\", as discussed in the first example.", "Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.", "As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.", "For future work, generation models with a better control on linguistic style need to be designed.", "As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.", "Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .", "While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.", "Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .", "For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.", "It however only outputs a text plan, mainly relying on heuristic rules.", "Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.", 
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.", "Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.", "Wachsmuth et al.", "(2017) build a search engine from arguments collected from various online debate portals.", "After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .", "Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.", "To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.", "Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.", "We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.", "Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.", "Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5.1", "5.2", "6.1", "6.2", "6.3", "7.1", "7.2", "7.3", "8", "9", "10" ], "paper_header_content": [ "Introduction", "Framework", "Data Collection and Processing", "Model", "Model Formulation", "Hybrid Beam Search Decoding", "Retrieval Methodology", "Gold-Standard Keyphrase Construction", "Final Dataset Statistics", "Training Setup", "Baseline and Comparisons", "Automatic Evaluation", "Topic-Relevance Evaluation", "Human Evaluation", "Further Discussion", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-131#paper-1354#slide-16
Human Evaluation
Motivation: Automatic evaluation can't really evaluate the overall coherence and informativeness of the generation. 3 trained judges who are fluent in English 3 systems: RETRIEVAL-BASED, SEQ2SEQ, OUR MODEL Aspects (each on a scale of 1 to 5, the higher the better) Grammaticality: if the output is fluent and grammatical English Informativeness: whether the output is informative or generic Relevance: if the output is on-topic and of correct stance 1 (low quality) 5 (high quality) checked speed limit criminal lanes taxi to the Food security is not an issue of how much food we produce. Informativeness I don't agree with you. Israeli are under a much more persistent and realistic security threat. (Topic: racial profiling) Gun control deters crime. Minority groups who endure everyday discrimination often suffer high rates of chronic diseases. System Grammaticality Informativeness Relevance - Human judges favor RETRIEVAL-BASED model in all aspects. - RETRIEVAL-BASED is human-written and relevant. - OUR MODEL is favored over SEQ2SEQ except Grammaticality.
Motivation: Automatic evaluation can't really evaluate the overall coherence and informativeness of the generation. 3 trained judges who are fluent in English 3 systems: RETRIEVAL-BASED, SEQ2SEQ, OUR MODEL Aspects (each on a scale of 1 to 5, the higher the better) Grammaticality: if the output is fluent and grammatical English Informativeness: whether the output is informative or generic Relevance: if the output is on-topic and of correct stance 1 (low quality) 5 (high quality) checked speed limit criminal lanes taxi to the Food security is not an issue of how much food we produce. Informativeness I don't agree with you. Israeli are under a much more persistent and realistic security threat. (Topic: racial profiling) Gun control deters crime. Minority groups who endure everyday discrimination often suffer high rates of chronic diseases. System Grammaticality Informativeness Relevance - Human judges favor RETRIEVAL-BASED model in all aspects. - RETRIEVAL-BASED is human-written and relevant. - OUR MODEL is favored over SEQ2SEQ except Grammaticality.
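The significance test behind Table 5 in this record is a one-way ANOVA over 1-to-5 judge ratings, and a few lines of SciPy reproduce that computation. The rating arrays below are placeholders, not the study's raw annotations:

```python
import numpy as np
from scipy.stats import f_oneway

# one informativeness rating per (thread, judge) pair; placeholder values only
seq2seq = np.array([1, 1, 2, 1, 1, 2, 1, 1, 1, 2])
ours    = np.array([2, 1, 2, 2, 1, 3, 2, 2, 1, 2])

print(f"{ours.mean():.1f} +/- {ours.std():.1f}")  # the mean +/- std format of Table 5
print(f_oneway(seq2seq, ours))                    # F statistic and p-value
```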
[]
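Finally, the two-stage TF-IDF reranking in the Retrieval Methodology sections of both records (up to 100 paragraphs first, then up to 10 sentences, keeping only positive-scoring candidates) maps naturally onto scikit-learn. Cosine similarity over TF-IDF vectors is one reasonable reading of the paper's "TF-IDF similarity", and the corpus variables here are stand-ins:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank_by_tfidf(query, candidates, top_k):
    """Rank candidate texts by TF-IDF cosine similarity to the query and
    keep at most top_k with strictly positive scores."""
    vec = TfidfVectorizer().fit(candidates + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(candidates))[0]
    ranked = sorted(zip(candidates, sims), key=lambda pair: -pair[1])
    return [c for c, s in ranked[:top_k] if s > 0]

# two-stage use, mirroring the paper: paragraphs first, then their sentences
# top_pars = rerank_by_tfidf(statement, paragraphs, 100)
# sentences = [s for p in top_pars for s in p.split(". ")]  # naive splitting (assumption)
# evidence = rerank_by_tfidf(statement, sentences, 10)
```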