{ "paper_id": "I17-1035", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:39:50.611867Z" }, "title": "Length, Interchangeability, and External Knowledge: Observations from Predicting Argument Convincingness", "authors": [ { "first": "Peter", "middle": [], "last": "Potash", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Lowell", "location": {} }, "email": "ppotash@cs.uml.edu" }, { "first": "Robin", "middle": [], "last": "Bhattacharya", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Lowell", "location": {} }, "email": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Lowell", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this work, we provide insight into three key aspects related to predicting argument convincingness. First, we explicitly display the power that text length possesses for predicting convincingness in an unsupervised setting. Second, we show that a bag-of-words embedding model posts state-of-the-art on a dataset of arguments annotated for convincingness, outperforming an SVM with numerous hand-crafted features as well as recurrent neural network models that attempt to capture semantic composition. Finally, we assess the feasibility of integrating external knowledge when predicting convincingness, as arguments are often more convincing when they contain abundant information and facts. We finish by analyzing the correlations between the various models we propose.", "pdf_parse": { "paper_id": "I17-1035", "_pdf_hash": "", "abstract": [ { "text": "In this work, we provide insight into three key aspects related to predicting argument convincingness. First, we explicitly display the power that text length possesses for predicting convincingness in an unsupervised setting. Second, we show that a bag-of-words embedding model posts state-of-the-art on a dataset of arguments annotated for convincingness, outperforming an SVM with numerous hand-crafted features as well as recurrent neural network models that attempt to capture semantic composition. Finally, we assess the feasibility of integrating external knowledge when predicting convincingness, as arguments are often more convincing when they contain abundant information and facts. We finish by analyzing the correlations between the various models we propose.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Predicting argument convincingness has mostly been studied in relation to the overall quality of a persuasive essay (Attali and Burstein, 2004; Landauer, 2003; Shermis et al., 2010) , with a recent focus specifically on predicting argument strength (Persing and Ng, 2015; Wachsmuth et al., 2016) . Zhang et al. (2016) have also attempted to predict argument convincingness, in the form of predicting debate winners. Unfortunately, these are very rare argumentative formats that are seldom encountered in everyday life. In practice, at least at the moment, we tend to digest a large quantity of our information from social media and engage in a tremendous amount of interpersonal communication using it. Since, in social media, communications are roughly a single paragraph, analyzing arguments in a persuasive essay or oxford-style debate is not applicable to our primary means of community engagement. 
Presenting an entire convincing argument within a single paragraph can be an invaluable skill in the modern world. This paper seeks to improve upon previous methodology for predicting argument convincingness.", "cite_spans": [ { "start": 116, "end": 143, "text": "(Attali and Burstein, 2004;", "ref_id": "BIBREF2" }, { "start": 144, "end": 159, "text": "Landauer, 2003;", "ref_id": "BIBREF12" }, { "start": 160, "end": 181, "text": "Shermis et al., 2010)", "ref_id": "BIBREF24" }, { "start": 249, "end": 271, "text": "(Persing and Ng, 2015;", "ref_id": "BIBREF18" }, { "start": 272, "end": 295, "text": "Wachsmuth et al., 2016)", "ref_id": "BIBREF27" }, { "start": 298, "end": 317, "text": "Zhang et al. (2016)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prompt: Is it better to have a lousy father or to be fatherless? Stance: It is better to have a lousy father.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is better to have a lousy father because researchers at the McGill University have warned that growing up without a father can permanently change the structure of a child's brain and make him/her more aggressive and angry.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument 1", "sec_num": null }, { "text": "Argument 2: Having a lousy father is better because when a child does not have a father, it causes him/her to look for a father figure. During such searches, a child may end up getting sexual harassed or being emotionally exploited to various degrees. Table 1 : Example of an argument pair where Argument 1 is more convincing. Habernal and Gurevych (2016b) have recently released a dataset of short, single-paragraph arguments annotated for convincingness, which we will refer to as UKPConvArg. For 16 issues, arguments with the same stance are compared with each other to determine, given a pair of arguments, which one is more convincing. Table 1 provides an example of an argument pair with arguments from the prompt 'Is it better to have a lousy father or to be fatherless?' and the stance 'It is better to have a lousy father'. In this pair, Argument 1 is chosen as more convincing. Other such issues include: 'Does India have the potential to lead the world?', 'Which web browser is better, Internet Explorer or Mozilla Firefox?', and 'Should physical education be mandatory in schools?'. In follow-up work, Habernal and Gurevych (2016a) examined the reasoning behind the annotations in their original corpus; that is, why arguments were selected as more convincing. Overwhelmingly, the reasons could be expressed by the following statement: \"Argument X has more details, information, facts or examples / more reasons / better reasoning / goes deeper / is more specific\". Although Habernal and Gurevych (2016b) experimented with two promising models, the models were not intended to directly take into account the reasons why an argument could be more convincing, as expressed in the previous quotation. The primary task of the dataset is, given two arguments with the same stance toward a topic, to determine which argument is more convincing; this corresponds to outputting a binary label. Most of our experiments focus on this task, as it was the annotation directive for annotating convincingness in Habernal and Gurevych (2016b) . From the pairwise annotation, they also derived convincingness scores for individual arguments, which is posed as a regression task. 
We evaluate on this task in Section 3.1.", "cite_spans": [ { "start": 315, "end": 344, "text": "Habernal and Gurevych (2016b)", "ref_id": "BIBREF9" }, { "start": 1104, "end": 1133, "text": "Habernal and Gurevych (2016a)", "ref_id": "BIBREF8" }, { "start": 1476, "end": 1505, "text": "Habernal and Gurevych (2016b)", "ref_id": "BIBREF9" }, { "start": 1996, "end": 2025, "text": "Habernal and Gurevych (2016b)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 240, "end": 247, "text": "Table 1", "ref_id": null }, { "start": 629, "end": 636, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Argument 1", "sec_num": null }, { "text": "In our work, we improve upon the initial experiments of Habernal and Gurevych in three ways:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument 1", "sec_num": null }, { "text": "(1) we offer heuristic-based methods that require no training or fitting of a model to data; (2) we explore modifications of the initial 'deep' model used by Habernal and Gurevych (2016a) , which was a Bidirectional Long Short-Term Memory (BLSTM) network; (3) we test the feasibility of offering factually relevant knowledge in the form of Wikipedia articles related to the argument topics.", "cite_spans": [ { "start": 160, "end": 189, "text": "Habernal and Gurevych (2016a)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Argument 1", "sec_num": null }, { "text": "In terms of heuristics, we examine the effectiveness of the Metric Entropy (ME) of a text for predicting convincingness, which is inspired by the notion that written English is well-formed, as opposed to random. Specifically, high ME corresponds to high randomness. The second heuristic uses similarity to Wikipedia articles, with the hypothesis that the Wikipedia articles can act as a factual support reference for the arguments. We also hypothesize that Wikipedia articles have the potential to grade the quality of the writing in the arguments, on the assumption that arguments that better match the writing in Wikipedia articles are more likely to exhibit the qualities that make an argument convincing. For all methods that use the presence of Wikipedia articles, we use several variations of a corpus to determine how well the methods leverage topic-specific articles, as opposed to randomly selected articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument 1", "sec_num": null }, { "text": "In terms of supervised techniques, we first follow previous approaches to classifying paired data, which create separate learned representations of the elements in a pair that are then concatenated for the final predictive model (Bowman et al., 2015; Mueller and Thyagarajan, 2016; Potash et al., 2016b) . Specifically, we experiment with creating separate representations using either a BLSTM or summing individual token embeddings. We then propose modifications of the supervised models to leverage external data. 
The models grow in complexity, approaching a form of Memory Network (Sukhbaatar et al., 2015) that computes a weighted sum of representations of Wikipedia articles.", "cite_spans": [ { "start": 223, "end": 244, "text": "(Bowman et al., 2015;", "ref_id": "BIBREF5" }, { "start": 245, "end": 275, "text": "Mueller and Thyagarajan, 2016;", "ref_id": "BIBREF14" }, { "start": 276, "end": 297, "text": "Potash et al., 2016b)", "ref_id": "BIBREF20" }, { "start": 591, "end": 616, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Argument 1", "sec_num": null }, { "text": "Our experimental results reveal several important insights into how to approach predicting convincingness. We summarize our findings as follows: 1) Unsupervised text length is an extremely competitive baseline that performs on par with highly-engineered classifiers and deep learning models; 2) The current state-of-the-art approach treats tokens as interchangeable, bypassing the need to model compositionality; 3) Wikipedia articles can provide meaningful external knowledge, though naive models have trouble dealing with the noise in a large corpus of documents, whereas a model that attends to the Wikipedia corpus is better equipped to handle the noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument 1", "sec_num": null }, { "text": "Habernal and Gurevych (2016b) present two methods in their dataset paper: (1) an SVM with numerous hand-crafted features; (2) a BLSTM that only uses word embeddings as input. Aside from the original corpus authors, only one other work has tested on the UKPConvArg dataset. Chalaguine and Schulz (2017) use a feature-selection method to determine the raw feature representation that serves as input into a feed-forward neural network. The authors conduct a thorough ablation study of the performance of individual feature types. The authors' best model records an accuracy of .766, compared to .781 and .757 for Habernal and Gurevych's SVM and BLSTM, respectively. Although the authors make an effort to determine the influence of individual feature types, their work continues to use supervised methods, which obscures the pure predictive power of individual features/metrics. There are few datasets annotated for the convincingness of arguments. Zhang et al. (2016) published a dataset of debate transcripts, annotated with audience polling that occurs before and after the debate. In terms of argumentation, the key distinction between this dataset and that of Habernal and Gurevych (2016b) is that in the debate dataset, the debate teams have opposing stances on a topic, whereas Habernal and Gurevych's dataset has labels for arguments with the same stance towards a topic. Persing and Ng (2015) constructed a corpus of persuasive essays annotated for the essays' argument strength, which is slightly different from other annotated persuasive essay corpora, which have more of a focus on overall writing quality.", "cite_spans": [ { "start": 946, "end": 965, "text": "Zhang et al. (2016)", "ref_id": "BIBREF30" }, { "start": 1162, "end": 1191, "text": "Habernal and Gurevych (2016b)", "ref_id": "BIBREF9" }, { "start": 1377, "end": 1398, "text": "Persing and Ng (2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "NLP datasets involving the processing of text pairs have become more prevalent. 
Examples include predicting textual entailment (Marelli et al., 2014; Bowman et al., 2015) , predicting semantic relatedness/similarity (Marelli et al., 2014; Agirre et al., 2016) , and predicting humor (Potash et al., 2016b; Shahaf et al., 2015) . These tasks present interesting challenges from a modeling perspective, as methods must allow for semantic comparison between the texts.", "cite_spans": [ { "start": 127, "end": 149, "text": "(Marelli et al., 2014;", "ref_id": "BIBREF13" }, { "start": 150, "end": 170, "text": "Bowman et al., 2015)", "ref_id": "BIBREF5" }, { "start": 216, "end": 238, "text": "(Marelli et al., 2014;", "ref_id": "BIBREF13" }, { "start": 239, "end": 259, "text": "Agirre et al., 2016)", "ref_id": "BIBREF1" }, { "start": 283, "end": 305, "text": "(Potash et al., 2016b;", "ref_id": "BIBREF20" }, { "start": 306, "end": 326, "text": "Shahaf et al., 2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Although relatively rare in the argument mining community, leveraging external knowledge sources is ubiquitous for the task of question answering (Kolomiyets and Moens, 2011) , using information retrieval techniques to mine the available documents for answers. Work such as Berant et al. (2013) forms a knowledge base from external documents and maps queries to knowledge-base entries. Weston et al. (2014) have proposed a neural network-based approach for large-scale question answering. In the argument mining community, Rinott et al. (2015) created a dataset for predicting potential support clauses for argumentative topics, while Braunstain et al. (2016) rank Wikipedia sentences by how well they support answers given by online users. Conversely, Wachsmuth et al. (2017) approach the problem of measuring relevance amongst arguments themselves, proposing a methodology based on PageRank (Page et al., 1999) .", "cite_spans": [ { "start": 145, "end": 173, "text": "(Kolomiyets and Moens, 2011)", "ref_id": "BIBREF11" }, { "start": 385, "end": 405, "text": "Weston et al. (2014)", "ref_id": null }, { "start": 634, "end": 658, "text": "Braunstain et al. (2016)", "ref_id": "BIBREF6" }, { "start": 748, "end": 771, "text": "Wachsmuth et al. (2017)", "ref_id": "BIBREF28" }, { "start": 888, "end": 907, "text": "(Page et al., 1999)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As Habernal and Gurevych (2016b) note in their paper, comparing the SVM and BLSTM systems, it is desirable for methodologies to require minimal preprocessing of text. Along those lines, methods that use heuristics can circumvent the need for supervised training. We refer to the models in this section as heuristic models, as opposed to unsupervised models, because they do not fit themselves to data; they merely compare various metric values to determine convincingness. We experiment with two types of heuristics: ME and Wikipedia similarity. The motivation for these heuristics is as follows: Metric Entropy has previously been applied to the task of predicting tweet deletion (Potash et al., 2016a) , with the idea that tweets with high ME are likely to be spam. Moreover, ME conveys how well-formed the language is in a piece of text, since higher ME means higher randomness in the language. 
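As a rough illustration, the ME heuristic amounts to only a few lines of Python (a sketch with our own variable names rather than the exact implementation; the formal definition follows in Section 3.1):

import math
from collections import Counter

def metric_entropy(text):
    # Shannon entropy of the character distribution of the text,
    # normalized by the text length (Equations 1 and 2 in Section 3.1).
    n = len(text)
    counts = Counter(text)
    entropy = -sum((f / n) * math.log2(f / n) for f in counts.values())
    return entropy / n

# For a pair of arguments, the one with the lower ME (less random and,
# in practice, longer) would be predicted as more convincing.
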
Conversely, Wikipedia similarity attempts to use external knowledge to measure the factual validity of the arguments, while also potentially measuring the writing quality of the arguments.", "cite_spans": [ { "start": 680, "end": 702, "text": "(Potash et al., 2016a)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Heuristic Methods", "sec_num": "3" }, { "text": "The Shannon Entropy of a text T containing a set of characters C is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Entropy", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H(T) = -\\sum_{c \\in C} P(c) \\log_2 P(c)", "eq_num": "(1)" } ], "section": "Metric Entropy", "sec_num": "3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Entropy", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(c) = \\frac{freq(c)}{len(T)}", "eq_num": "(2)" } ], "section": "Metric Entropy", "sec_num": "3.1" }, { "text": "and freq(c) is the number of times c appears in T. Consequently, ME is the Shannon entropy divided by the text length, len(T). Since ME produces a continuous output, it is sensible to evaluate it using the regression task from Habernal and Gurevych (2016b) . Because ME is a combination of Shannon Entropy and text length, we also evaluate their effectiveness separately. We admit, however, that our initial experiments only included ME and Shannon Entropy, but given the vastly different performance of the two metrics, we decided to test length on its own as well.", "cite_spans": [ { "start": 230, "end": 259, "text": "Habernal and Gurevych (2016b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Metric Entropy", "sec_num": "3.1" }, { "text": "Suppose we have vector representations of an argument a and a Wikipedia article w. The similarity score, sim(a, w), is simply the dot product of the two representations, aw^T. Therefore, given a corpus of Wikipedia articles W, we define the Wikipedia Similarity Score, WSS, of an argument a as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia Similarity", "sec_num": "3.2" }, { "text": "WSS(a) = \\sum_{w \\in W} aw^T (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia Similarity", "sec_num": "3.2" }, { "text": "For pairwise prediction, we predict the argument with the higher score as the more convincing argument. We consider two possible representations for texts: 1) a term-frequency (TF) count, and 2) summing the embeddings of all the tokens in the text. For the TF representation, we use the CountVectorizer class from Scikit-learn (Pedregosa et al., 2011) to process the text and create the appropriate representation. For the embedding representation, we use 300-dimensional GloVe embeddings (Pennington et al., 2014) trained on the Common Crawl corpus of 840 billion tokens.", "cite_spans": [ { "start": 460, "end": 485, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Wikipedia Similarity", "sec_num": "3.2" }, { "text": "Our Wikipedia data is from the May 20th, 2017 dump 1 . We clean the raw Wikipedia data using gensim (\u0158eh\u016f\u0159ek and Sojka, 2010). We experiment with three different Wikipedia corpora. 
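Before detailing the corpora, the scoring heuristic itself can be sketched as follows (a sketch with our own names, assuming the argument and article vectors have already been built with either representation described above):

import numpy as np

def wss(arg_vec, wiki_vecs):
    # Wikipedia Similarity Score (Equation 3): the sum of dot products
    # between one argument representation and every article representation.
    return float(sum(np.dot(arg_vec, w) for w in wiki_vecs))

def predict_pair(arg_vec_1, arg_vec_2, wiki_vecs):
    # The argument with the higher score is predicted as more convincing.
    return 1 if wss(arg_vec_1, wiki_vecs) > wss(arg_vec_2, wiki_vecs) else 2

The same sketch covers both representations: arg_vec is either a term-frequency vector or a sum of GloVe vectors, with wiki_vecs built in the same way.
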
The first corpus contains 30 hand-picked Wikipedia articles, chosen to be of the same subject matter as the various topics in the argument convincingness corpora. We refer to this corpus as Wiki hand-picked (hp). The second corpus contains 38k random Wikipedia articles, chosen to be approximately the length of the hand-picked articles. The motivation behind the second corpus is to determine how valuable the topic-specific information is for assessing the validity of the arguments. The second corpus also simulates a situation where a model accesses an arbitrary knowledge base, as opposed to one that is hand-selected. We refer to this corpus as Wiki random (ran). The third corpus combines the first two corpora, with the goal of determining how well the heuristic method can deal with the potential 'noise' of randomly chosen Wikipedia articles. We refer to this corpus as Wiki hp+ran.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia Similarity", "sec_num": "3.2" }, { "text": "Habernal and Gurevych (2016b) propose two supervised experiments for predicting argument convincingness: an SVM with numerous hand-crafted features, and a BLSTM that only uses word embeddings as input. While our heuristic methods show promising results, they do not yet achieve state-of-the-art results on the argument convincingness dataset. In this section, we motivate our supervised experiments with a combination of results from Section 3.2 and Habernal and Gurevych. All models have the same cost function, which is the binary cross-entropy of the training set, based on the sigmoid activation of a continuous value from a 1-dimensional dense layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Methods", "sec_num": "4" }, { "text": "The BLSTM model that Habernal and Gurevych (2016b) propose concatenates the text of the argument pairs, separated by a special delimiter. This single sequence is then run over by forward and backward LSTMs to produce the BLSTM embedding that is then used for logistic regression. We propose to model each argument in the argument pair separately, creating a representation for each argument in the pair; these representations are then concatenated together for the logistic regression output. The term 'Siamese' refers to the fact that the representations are created separately (we adopt the terminology from Mueller and Thyagarajan (2016)). Each argument goes through a BLSTM to produce its individual representation, using GloVe vectors as input to the BLSTM.", "cite_spans": [ { "start": 21, "end": 50, "text": "Habernal and Gurevych (2016b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Siamese BLSTM", "sec_num": "4.1" }, { "text": "While a BLSTM model is very logical for most language tasks, given its sequential nature, work such as Joulin et al. (2016) shows that simply summing individual token embeddings can be extremely competitive for the task of text classification. Furthermore, in the current climate of increasingly complex deep learning models, it is important to continue to compare to simpler models. For this method, we represent an argument in an argument pair as the sum of its tokens' embeddings. Table 3 : Results of Wikipedia similarity experiments, using either a term-frequency representation (TF) or a sum of word embeddings (E). 
We experiment with three types of Wikipedia corpora: 30 hand-picked articles chosen to be highly relevant to the argument topics (hp); roughly 38k randomly chosen articles (ran); a combination of the first two corpora (hp+ran).", "cite_spans": [ { "start": 103, "end": 123, "text": "Joulin et al. (2016)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 530, "end": 537, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Siamese BOW Embedding", "sec_num": "4.2" }, { "text": "Given the TF representation of a set of texts T in matrix format A and a corresponding embedding matrix E, the BOW Embedding (BOWE) representation is equivalent to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Siamese BOW Embedding", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "BOWE(T) = AE", "eq_num": "(4)" } ], "section": "Siamese BOW Embedding", "sec_num": "4.2" }, { "text": "For our application, our input will have two matrices, T_l and T_r, representing the left and right arguments in the pair. Once the individual representations are created, as with the Siamese BLSTM, we concatenate them together as the input for logistic regression. Lastly, instead of continuing to train the initialized embedding matrix E, we fix E, calling it E_fixed, and pass it through a fully-connected layer, W_emb:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Siamese BOW Embedding", "sec_num": "4.2" }, { "text": "E_learned = E_fixed W_emb (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Siamese BOW Embedding", "sec_num": "4.2" }, { "text": "Thus, E_learned replaces E in Equation 4. Because we are summing embedding vectors to create the representation, the values of the representations' dimensions could become large, causing a dramatically increased loss. While such methods as gradient clipping and gradient normalization could be used, we found it simple enough to divide the representation by 100.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Siamese BOW Embedding", "sec_num": "4.2" }, { "text": "We now begin to modify the methodology described in Section 3.2 to add an increasing amount of complexity to better integrate the Wikipedia articles. The first model we propose uses the representations from Equation 4 to represent the arguments and Wikipedia articles; however, the representation is computed slightly differently for the arguments and the Wikipedia articles. While the argument representations use E_learned, the Wikipedia articles use E_fixed, and the result of BOWE(T) then passes through a fully-connected layer, W_wiki. Just as we artificially normalized the argument representations, we divide the Wikipedia representations by 10,000, due to their greatly increased length compared to the argument text. Once we have the individual representations, we compute a similarity score as done in Equation 3. The one difference, though, is that we apply tanh to the result of the dot product to keep the summation in a manageable range, which aids training. The resulting similarity scores, one for each argument in the pair, become the features for a 2-dimensional logistic regression model. This model does not use dropout at the fully-connected layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Wikipedia Similarity", "sec_num": "4.3" }, { "text": "The model from Section 4.3 gives equal importance to the similarity scores from all Wikipedia articles. 
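In sketch form (our own names; a simplification of the model just described rather than the exact implementation), that scoring path for one argument is:

import numpy as np

def sws_feature(arg_bow, wiki_bow, E_fixed, W_emb, W_wiki):
    # Argument representation: summed learned embeddings, divided by 100.
    a = (arg_bow @ (E_fixed @ W_emb)) / 100.0
    # Article representations: summed fixed embeddings passed through the
    # fully-connected layer W_wiki, divided by 10,000 for article length.
    M = (wiki_bow @ E_fixed @ W_wiki) / 10000.0
    # One tanh-squashed similarity per article; every article contributes
    # to the summed feature with the same weight.
    return float(np.tanh(M @ a).sum())

The two resulting features, one per argument in the pair, feed the 2-dimensional logistic regression. 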
However, it is more intuitive for more relevant articles to have more importance. Therefore, we construct a model similar to the end-to-end Memory Network from Sukhbaatar et al. (2015) . We create a weight for each score (also interpretable as a probability score P_j) for each Wikipedia article, w_i, and argument, a_j, as 2 :", "cite_spans": [ { "start": 262, "end": 286, "text": "Sukhbaatar et al. (2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Memory Network with Wikipedia", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_j(w_i) = softmax(a_j w_i^T)", "eq_num": "(6)" } ], "section": "Memory Network with Wikipedia", "sec_num": "4.4" }, { "text": "which is used to create a weighted sum of the Wikipedia articles, s_j, for each argument j:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Network with Wikipedia", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s_j = \\sum_{i=1}^{|W|} P_j(w_i) w_i", "eq_num": "(7)" } ], "section": "Memory Network with Wikipedia", "sec_num": "4.4" }, { "text": "We create the final representation, o_j, for argument j as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Network with Wikipedia", "sec_num": "4.4" }, { "text": "o_j = a_j + s_j (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Network with Wikipedia", "sec_num": "4.4" }, { "text": "which is the representation that is the input to the logistic regression layer (one for each argument in the pair).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Network with Wikipedia", "sec_num": "4.4" }, { "text": "In each table that presents results, bold face indicates that a given system performed highest on a given topic within that table. An asterisk indicates that a given system performed highest on a given topic across all tables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Results of our ME experiments are shown in Table 2. We present the results on the regression task. The results of the Wikipedia similarity experiments are shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 164, "end": 171, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Heuristic Methods", "sec_num": "5.1" }, { "text": "Results of our supervised experiments are shown in Tables 4 and 5 . We present the results of the Siamese BLSTM (SBLSTM), Siamese BOW Embeddings (SBOWE), Supervised Wikipedia similarity (SWS), and Memory Network with Wikipedia (MNW). Each model that uses Wikipedia articles is run with Wiki hp, Wiki ran, and Wiki hp+ran, as described in Section 3.2. All reported results are the average of three different runs. We report the accuracy on each topic, as well as the macro average across all topics. We compare our results with the SVM and BLSTM models from Habernal and Gurevych (2016b) in Table 4 . All models have dropout (Srivastava et al., 2014) of 0.5 at the dense layer (except for the model described in Section 4.3) and use a batch size of 32, as done by Habernal and Gurevych (2016b) in their BLSTM model. All models are implemented in TensorFlow (Abadi et al., 2016) and train for four epochs. The entire dataset has 11,650 argument pairs across all 32 topics. 
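(As a quick sanity check on these figures, 11,650/32 is approximately 364 pairs per topic.) 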
Since one topic is held-out for testing at a time, there is on average an 11,286/364 train/test split.", "cite_spans": [ { "start": 557, "end": 586, "text": "Habernal and Gurevych (2016b)", "ref_id": "BIBREF9" }, { "start": 624, "end": 649, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF25" }, { "start": 763, "end": 792, "text": "Habernal and Gurevych (2016b)", "ref_id": "BIBREF9" }, { "start": 856, "end": 876, "text": "(Abadi et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 51, "end": 65, "text": "Tables 4 and 5", "ref_id": "TABREF3" }, { "start": 590, "end": 597, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Supervised Methods", "sec_num": "5.2" }, { "text": "First, it is rather remarkable that text length alone, as a stand-alone metric, is able to record state-of-the-art results on the regression task, relative to the models of Habernal and Gurevych (2016b) .", "cite_spans": [ { "start": 108, "end": 137, "text": "Habernal and Gurevych (2016b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Heuristic Methods", "sec_num": "6.1" }, { "text": "Although Chalaguine and Schulz (2017) directly showed the power of text length in a supervised setting, our results show an even simpler method for producing predictions on par with the previous state-of-the-art. There is intuitive reasoning for this result, since, as mentioned in Section 1, arguments are predominantly more convincing when they provide more: more facts, more information, more depth, etc. When evaluated on the pairwise binary prediction task, Metric Entropy and text length record 77.2% and 77.3% accuracy, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Methods", "sec_num": "6.1" }, { "text": "Reviewing the Wikipedia similarity results, it is evident that the BOW embedding representation does offer greater predictive power when compared to the term-frequency representation. This unsupervised method even outperforms the supervised methods BLSTM and SBLSTM. Furthermore, compared to other methods that use Wikipedia articles, this method is less sensitive to the content of the articles, as it actually shows a very slight improvement when the hand-picked articles are not present, which is the opposite of all the other Wikipedia-based methods. Table 5 : We experiment with three types of Wikipedia corpora: 30 hand-picked articles chosen to be highly relevant to the argument topics (hp); roughly 38k randomly chosen articles (ran); a combination of the first two corpora (hp+ran).", "cite_spans": [], "ref_spans": [ { "start": 557, "end": 564, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Heuristic Methods", "sec_num": "6.1" }, { "text": "The first result to note is that the BOW Embedding model posts a new state-of-the-art on the dataset. This shows that the current best approach to predicting argument convincingness treats word order as interchangeable. Although it is reasonable to surmise that facts and information depend on local compositionality, current methods for modeling such linguistic phenomena underperform. When comparing supervised models that integrate Wikipedia articles, we see that the MNW model is better equipped to handle the noise from a large corpus of documents when compared to the SWS model, whose results show roughly a 1% drop in accuracy when the ran corpus is added to the hp corpus. Table 6 : Correlations between systems. 
Bold indicates the highest correlation for a given row.", "cite_spans": [], "ref_spans": [ { "start": 682, "end": 689, "text": "Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Supervised Methods", "sec_num": "6.2" }, { "text": "The very high correlation between LEN and SVM suggests that the main predictive power of the SVM model can be distilled into using the text length to predict argument convincingness. What is perhaps more surprising is how highly LEN correlates with WS-E. This could potentially be explained by the fact that arguments with more words will sum together more embeddings, resulting in representations with larger norms, which produce larger dot products with the article representations. However, the same argument can be made for the TF representation, so a more valid reason remains to be seen (note, though, that SBOWE and WS-TF have a low correlation with LEN). Secondly, we see that all models based on BOW embeddings have a very high correlation with each other, which is an intuitive finding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Correlations", "sec_num": "6.3" }, { "text": "In this work we have shown three key insights into the task of predicting argument convincingness: 1) Heuristic text length is an extremely competitive baseline that performs on par with highly-engineered classifiers and deep learning models;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "2) The current state-of-the-art approach treats tokens as interchangeable, bypassing the need to model compositionality; 3) Wikipedia articles can provide meaningful external knowledge, though naive models have trouble dealing with the noise in a large corpus of documents, whereas a model that attends to the Wikipedia corpus is better equipped to handle the noise. Future work can focus on models that better handle compositionality, as well as integration of external knowledge, with an aim to surpass our new state-of-the-art on the corpus. One simple way to potentially enhance our MNW model is to perform multiple hops, a technique shown to greatly increase performance when using Memory Networks for other applications (Sukhbaatar et al., 2015) .", "cite_spans": [ { "start": 727, "end": 752, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We note that we also experimented with an attention mechanism more akin to that of Bahdanau et al. (2014), which uses a latent vector v whose dot product is taken with the sum a_j + w_i. However, this yielded the same results as the currently presented model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by the U.S. Army Research Office under Grant No. 
W911NF-16-1-0174.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Citro", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Devin", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1603.04467" ] }, "num": null, "urls": [], "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2016. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "M", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Mona", "middle": [ "T" ], "last": "Cer", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "German", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2016, "venue": "SemEval@ NAACL-HLT", "volume": "", "issue": "", "pages": "497--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Carmen Banea, Daniel M Cer, Mona T Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, Ger- man Rigau, and Janyce Wiebe. 2016. Semeval- 2016 task 1: Semantic textual similarity, monolin- gual and cross-lingual evaluation. In SemEval@ NAACL-HLT, pages 497-511.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automated essay scoring with e-rater R v. 2.0", "authors": [ { "first": "Yigal", "middle": [], "last": "Attali", "suffix": "" }, { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" } ], "year": 2004, "venue": "ETS Research Report Series", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yigal Attali and Jill Burstein. 2004. Automated essay scoring with e-rater R v. 2.0. 
ETS Research Report Series, 2004(2).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Semantic parsing on freebase from question-answer pairs", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Chou", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP, volume 2, page 6.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "Gabor", "middle": [], "last": "Samuel R Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.05326" ] }, "num": null, "urls": [], "raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large anno- tated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Supporting human answers for advice-seeking questions in cqa sites", "authors": [ { "first": "Liora", "middle": [], "last": "Braunstain", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Kurland", "suffix": "" }, { "first": "David", "middle": [], "last": "Carmel", "suffix": "" }, { "first": "Idan", "middle": [], "last": "Szpektor", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Shtok", "suffix": "" } ], "year": 2016, "venue": "European Conference on Information Retrieval", "volume": "", "issue": "", "pages": "129--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liora Braunstain, Oren Kurland, David Carmel, Idan Szpektor, and Anna Shtok. 2016. Supporting human answers for advice-seeking questions in cqa sites. In European Conference on Information Retrieval, pages 129-141. 
Springer, Cham.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Assessing convincingness of arguments in online debates with limited number of features", "authors": [ { "first": "Lisa", "middle": [], "last": "Andreevna Chalaguine", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Schulz", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Student Research Workshop at the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa Andreevna Chalaguine and Claudia Schulz. 2017. Assessing convincingness of arguments in online de- bates with limited number of features. In Proceed- ings of the Student Research Workshop at the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "What makes a convincing argument? empirical analysis and detecting attributes of convincingness in web argumentation", "authors": [ { "first": "Ivan", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2016, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1214--1223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Habernal and Iryna Gurevych. 2016a. What makes a convincing argument? empirical analysis and detecting attributes of convincingness in web ar- gumentation. In EMNLP, pages 1214-1223.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Which argument is more convincing? analyzing and predicting convincingness of web arguments using bidirectional lstm", "authors": [ { "first": "Ivan", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2016, "venue": "In ACL", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Habernal and Iryna Gurevych. 2016b. Which ar- gument is more convincing? analyzing and predict- ing convincingness of web arguments using bidirec- tional lstm. In ACL (1).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.01759" ] }, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A survey on question answering technology from an information retrieval perspective", "authors": [ { "first": "Oleksandr", "middle": [], "last": "Kolomiyets", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2011, "venue": "Information Sciences", "volume": "181", "issue": "24", "pages": "5412--5434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oleksandr Kolomiyets and Marie-Francine Moens. 2011. A survey on question answering technology from an information retrieval perspective. 
Informa- tion Sciences, 181(24):5412-5434.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automated scoring and annotation of essays with the intelligent essay assessor", "authors": [ { "first": "K", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "", "middle": [], "last": "Landauer", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas K Landauer. 2003. Automated scoring and an- notation of essays with the intelligent essay assessor. Automated essay scoring: A crossdisciplinary per- spective.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A sick cure for the evaluation of compositional distributional semantic models", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "LREC", "volume": "", "issue": "", "pages": "216--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zam- parelli. 2014. A sick cure for the evaluation of com- positional distributional semantic models. In LREC, pages 216-223.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Siamese recurrent architectures for learning sentence similarity", "authors": [ { "first": "Jonas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Thyagarajan", "suffix": "" } ], "year": 2016, "venue": "AAAI", "volume": "", "issue": "", "pages": "2786--2792", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similar- ity. In AAAI, pages 2786-2792.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The pagerank citation ranking: Bringing order to the web", "authors": [ { "first": "Lawrence", "middle": [], "last": "Page", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation rank- ing: Bringing order to the web. 
Technical report, Stanford InfoLab.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Scikit-learn: Machine learning in python", "authors": [ { "first": "Fabian", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "Ga\u00ebl", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Bertrand", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Dubourg", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. Journal of Machine Learning Research, 12(Oct):2825-2830.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "14", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532- 1543.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Modeling argument strength in student essays", "authors": [ { "first": "Isaac", "middle": [], "last": "Persing", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac Persing and Vincent Ng. 2015. Modeling argu- ment strength in student essays.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Using topic modeling and text embeddings to predict deleted tweets", "authors": [ { "first": "Peter", "middle": [], "last": "Potash", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Bell", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Harrison", "suffix": "" } ], "year": 2016, "venue": "Proceedings of AAAI WIT-EC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Potash, Eric Bell, and Joshua Harrison. 2016a. Using topic modeling and text embeddings to pre- dict deleted tweets. 
Proceedings of AAAI WIT-EC.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "# hashtagwars: Learning a sense of humor", "authors": [ { "first": "Peter", "middle": [], "last": "Potash", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1612.03216" ] }, "num": null, "urls": [], "raw_text": "Peter Potash, Alexey Romanov, and Anna Rumshisky. 2016b. # hashtagwars: Learning a sense of humor. arXiv preprint arXiv:1612.03216.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Software Framework for Topic Modelling with Large Corpora", "authors": [ { "first": "Petr", "middle": [], "last": "Radim\u0159eh\u016f\u0159ek", "suffix": "" }, { "first": "", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta. ELRA. http://is.muni.cz/ publication/884893/en.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Show me your evidence-an automatic method for context dependent evidence detection", "authors": [ { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Lena", "middle": [], "last": "Dankin", "suffix": "" }, { "first": "Carlos", "middle": [ "Alzate" ], "last": "Perez", "suffix": "" }, { "first": "M", "middle": [], "last": "Mitesh", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Khapra", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "440--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruty Rinott, Lena Dankin, Carlos Alzate Perez, Mitesh M Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show me your evidence-an automatic method for context dependent evidence detection. In EMNLP, pages 440-450.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Inside jokes: Identifying humorous cartoon captions", "authors": [ { "first": "Dafna", "middle": [], "last": "Shahaf", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Horvitz", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Mankoff", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "1065--1074", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dafna Shahaf, Eric Horvitz, and Robert Mankoff. 2015. Inside jokes: Identifying humorous cartoon captions. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1065-1074. ACM.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Automated essay scoring: Writing assessment and instruction. 
International encyclopedia of education", "authors": [ { "first": "D", "middle": [], "last": "Mark", "suffix": "" }, { "first": "Jill", "middle": [], "last": "Shermis", "suffix": "" }, { "first": "Derrick", "middle": [], "last": "Burstein", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "", "middle": [], "last": "Zechner", "suffix": "" } ], "year": 2010, "venue": "", "volume": "4", "issue": "", "pages": "20--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark D Shermis, Jill Burstein, Derrick Higgins, and Klaus Zechner. 2010. Automated essay scoring: Writing assessment and instruction. International encyclopedia of education, 4(1):20-26.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Dropout: a simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(1):1929-1958.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "End-to-end memory networks", "authors": [ { "first": "Sainbayar", "middle": [], "last": "Sukhbaatar", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2440--2448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440-2448.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Using argument mining to assess the argumentation quality of essays", "authors": [ { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Al", "middle": [], "last": "Khalid", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Khatib", "suffix": "" }, { "first": "", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2016, "venue": "COLING", "volume": "", "issue": "", "pages": "1680--1691", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henning Wachsmuth, Khalid Al Khatib, and Benno Stein. 2016. Using argument mining to assess the argumentation quality of essays. 
In COLING, pages 1680-1691.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "pagerank for argument relevance", "authors": [ { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" }, { "first": "Yamen", "middle": [], "last": "Ajjour", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1117--1127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henning Wachsmuth, Benno Stein, and Yamen Ajjour. 2017. pagerank for argument relevance. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics, volume 1, pages 1117-1127.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Conversational flow in oxford-style debates", "authors": [ { "first": "Justine", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ravi", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" }, { "first": "Cristian", "middle": [], "last": "Danescu-Niculescu-Mizil", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1604.03114" ] }, "num": null, "urls": [], "raw_text": "Justine Zhang, Ravi Kumar, Sujith Ravi, and Cris- tian Danescu-Niculescu-Mizil. 2016. Conversa- tional flow in oxford-style debates. arXiv preprint arXiv:1604.03114.", "links": null } }, "ref_entries": { "TABREF3": { "html": null, "num": null, "content": "
 | BLSTM | LEN | MNW | SBLSTM | SBOWE | SVM | SWS | WS-E | WS-TF |
BLSTM | 1.000 | 0.508 | 0.739 | 0.733 | 0.740 | 0.534 | 0.785 | 0.519 | 0.585 |
LEN | 0.508 | 1.000 | 0.574 | 0.202 | 0.647 | 0.964 | 0.585 | 0.915 | 0.530 |
MNW | 0.739 | 0.574 | 1.000 | 0.726 | 0.969 | 0.608 | 0.975 | 0.465 | 0.651 |
SBLSTM | 0.733 | 0.202 | 0.726 | 1.000 | 0.722 | 0.277 | 0.723 | 0.173 | 0.528 |
SBOWE | 0.740 | 0.647 | 0.969 | 0.722 | 1.000 | 0.681 | 0.948 | 0.552 | 0.683 |
SVM | 0.534 | 0.964 | 0.608 | 0.277 | 0.681 | 1.000 | 0.615 | 0.904 | 0.584 |
SWS | 0.785 | 0.585 | 0.975 | 0.723 | 0.948 | 0.615 | 1.000 | 0.528 | 0.630 |
WS-E | 0.519 | 0.915 | 0.465 | 0.173 | 0.552 | 0.904 | 0.528 | 1.000 | 0.505 |
WS-TF | 0.585 | 0.530 | 0.651 | 0.528 | 0.683 | 0.584 | 0.630 | 0.505 | 1.000 |