{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:09:53.443299Z" }, "title": "Argument Mining on Twitter: A Case Study on the Planned Parenthood Debate", "authors": [ { "first": "Muhammad", "middle": [], "last": "Mahad", "suffix": "", "affiliation": { "laboratory": "NAVER AI Lab", "institution": "University of Richmond", "location": {} }, "email": "" }, { "first": "Afzal", "middle": [], "last": "Bhatti", "suffix": "", "affiliation": { "laboratory": "NAVER AI Lab", "institution": "University of Richmond", "location": {} }, "email": "mahad.bhatti@richmond.edu" }, { "first": "Ahsan", "middle": [], "last": "Suheer", "suffix": "", "affiliation": { "laboratory": "NAVER AI Lab", "institution": "University of Richmond", "location": {} }, "email": "ahsansuheer.ahmad@richmond.edu" }, { "first": "Joonsuk", "middle": [], "last": "Park", "suffix": "", "affiliation": { "laboratory": "NAVER AI Lab", "institution": "University of Richmond", "location": {} }, "email": "park@joonsuk.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Twitter is a popular platform to share opinions and claims, which may be accompanied by the underlying rationale. Such information can be invaluable to policy makers, marketers and social scientists, to name a few. However, the effort to mine arguments on Twitter has been limited, mainly because a tweet is typically too short to contain an argument: both a claim and a premise. In this paper, we propose a novel problem formulation to mine arguments from Twitter: We formulate argument mining on Twitter as a text classification task to identify tweets that serve as premises for a hashtag that represents a claim of interest. To demonstrate the efficacy of this formulation, we mine arguments for and against funding Planned Parenthood expressed in tweets. 
We first present a new dataset of 24,100 tweets containing hashtag #StandWithPP or #DefundPP, manually labeled as SUPPORT WITH REASON, SUPPORT WITHOUT REASON, or NO EXPLICIT SUPPORT. We then train classifiers to determine the types of tweets, achieving the best performance of 71% F1. Our results show claim-specific keywords to be the most informative features, which in turn reveal prominent arguments for and against funding Planned Parenthood.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Twitter is a popular platform to share opinions and claims, which may be accompanied by the underlying rationale. Such information can be invaluable to policy makers, marketers and social scientists, to name a few. However, the effort to mine arguments on Twitter has been limited, mainly because a tweet is typically too short to contain an argument: both a claim and a premise. In this paper, we propose a novel problem formulation to mine arguments from Twitter: We formulate argument mining on Twitter as a text classification task to identify tweets that serve as premises for a hashtag that represents a claim of interest. To demonstrate the efficacy of this formulation, we mine arguments for and against funding Planned Parenthood expressed in tweets. We first present a new dataset of 24,100 tweets containing hashtag #StandWithPP or #DefundPP, manually labeled as SUPPORT WITH REASON, SUPPORT WITHOUT REASON, or NO EXPLICIT SUPPORT. We then train classifiers to determine the types of tweets, achieving the best performance of 71% F1. 
Our results show claim-specific keywords to be the most informative features, which in turn reveal prominent arguments for and against funding Planned Parenthood.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The goal of argument mining is to automatically extract arguments-typically defined as consisting of both a claim and at least one premise supporting the claim-from text in various domains. By analyzing the argumentative structure, we can not only identify claims, but also gain a deeper understanding of the evidence and reasons behind the claims (Rahwan et al., 2009; Mochales and Moens, 2011a; Peldszus and Stede, 2013; Lippi and Torroni, 2015; Budzynska and Villata, 2016; Lawrence and Reed, 2020). The domains for argument mining have quickly expanded to include less formally written text on social media (Wyner et al., 2012; Goudas et al., 2014; Park and Cardie, 2014; Morio and Fujita, 2018; Chakrabarty et al., 2019).", "cite_spans": [ { "start": 348, "end": 369, "text": "(Rahwan et al., 2009;", "ref_id": "BIBREF32" }, { "start": 370, "end": 396, "text": "Mochales and Moens, 2011a;", "ref_id": "BIBREF22" }, { "start": 397, "end": 422, "text": "Peldszus and Stede, 2013;", "ref_id": "BIBREF28" }, { "start": 423, "end": 447, "text": "Lippi and Torroni, 2015;", "ref_id": "BIBREF17" }, { "start": 448, "end": 476, "text": "Budzynska and Villata, 2016;", "ref_id": "BIBREF6" }, { "start": 477, "end": 501, "text": "Lawrence and Reed, 2020)", "ref_id": "BIBREF16" }, { "start": 616, "end": 636, "text": "(Wyner et al., 2012;", "ref_id": "BIBREF47" }, { "start": 637, "end": 657, "text": "Goudas et al., 2014;", "ref_id": "BIBREF11" }, { "start": 658, "end": 680, "text": "Park and Cardie, 2014;", "ref_id": "BIBREF27" }, { "start": 681, "end": 704, "text": "Morio and Fujita, 2018;", "ref_id": "BIBREF25" }, { "start": 705, "end": 729, "text": "Chakrabarty et al., 2019", "ref_id": "BIBREF7" } ], 
"ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Yet, the effort to mine arguments from Twitter has been limited due to a rather obvious reason: tweets are often too short to contain an entire argument, i.e., a claim and a premise (Dusmanu et al., 2017). For this reason, existing approaches to argument mining on Twitter typically focus on identifying claims, evidence, or either, but not both at the same time (Addawood and Bashir, 2016; Bosc et al., 2016a; Dusmanu et al., 2017; W\u00fchrl and Klinger, 2021). This is not ideal, since the underlying rationale can be as important as the claim.", "cite_spans": [ { "start": 180, "end": 202, "text": "(Dusmanu et al., 2017)", "ref_id": "BIBREF10" }, { "start": 362, "end": 389, "text": "(Addawood and Bashir, 2016;", "ref_id": "BIBREF0" }, { "start": 390, "end": 409, "text": "Bosc et al., 2016a;", "ref_id": "BIBREF3" }, { "start": 410, "end": 431, "text": "Dusmanu et al., 2017;", "ref_id": "BIBREF10" }, { "start": 432, "end": 456, "text": "W\u00fchrl and Klinger, 2021)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To mine full arguments from Twitter, we propose a novel problem formulation based on an observation-some trendy hashtags serve as placeholders for claims, which may or may not be supported by the contents of the tweets containing them. In the case of the Planned Parenthood debate, the two opposing sides use hashtags #StandWithPP and #DefundPP to specify their respective claims; only a subset of the tweets serve as premises supporting the given claim. For instance, Example 3 in Table 1 can be interpreted as an argument in which the premise \"#AllLivesMatter even the unborn.\" (= all lives matter, even the unborn) supports the claim to \"#DefundPP\" (= Planned Parenthood should not be funded by the government.). 
In contrast, Example 6 cannot be considered a premise for the claim, as it does not provide a reason to \"#StandWithPP\" (= Planned Parenthood should continue to be supported by the government). In the case of Example 10, it is not even clear whether or not the user supports the claim #StandWithPP represents. Thus, the tweet cannot be considered a premise for the claim. (While both Examples 6 and 10 are not considered premises, distinguishing the two can be useful for compiling a quantitative summary, e.g. the number of tweets showing support-the former should count as a supporting tweet, unlike the latter.) Henceforth, we call a hashtag representing a claim a claim-hashtag, and a tweet serving as a premise a premise-tweet. From an argument mining perspective, the claim is already known for tweets containing a claim-hashtag, i.e., the claim represented by the claim-hashtag. Such tweets can be easily retrieved using the Twitter API or simple text matching. Thus, the main challenge is in determining whether a given tweet is a premise-tweet. In other words, we formulate argument mining on Twitter as a text classification task to identify premise-tweets for claim-hashtags of interest.", "cite_spans": [], "ref_spans": [ { "start": 482, "end": 489, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To demonstrate the efficacy of the proposed formulation, we mine arguments for and against funding Planned Parenthood expressed on Twitter: We first present a new dataset of 24,100 tweets containing hashtag #StandWithPP or #DefundPP, each manually labeled as SUPPORT WITH REASON, SUPPORT WITHOUT REASON, or NO EXPLICIT SUPPORT. We then train several classifiers and test them on 30% of the dataset held out in advance. We find that fine-tuned BERT performs the best, achieving 71% F1. 
We also show that claim-specific words serve as the most important features for this task, which in turn reveal important arguments for and against funding Planned Parenthood.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Why Planned Parenthood? The Planned Parenthood debate is multi-faceted, involving issues like the personhood of fetuses, women's rights, and health services to people of various socioeconomic status. A major benefit of automatically extracting arguments from Twitter is that it provides easy access to arguments people have made. This is especially helpful for complex topics like Planned Parenthood, where unique but noteworthy arguments can be lost in the midst of others. From a practical perspective, each side of the Planned Parenthood debate has a dominant hashtag, allowing us to target two specific hashtags and gain a holistic view of the debate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main contributions are threefold:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a novel problem formulation for mining full arguments-both a claim and a premise-on Twitter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We present a newly annotated dataset consisting of 24,100 tweets 1 , which is 10 to 80 times larger than existing datasets for mining arguments from Twitter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We identify prominent arguments for and against funding Planned Parenthood expressed on Twitter by analyzing the most informative features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Related Work", "cite_spans": [], 
"ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Argument mining has been used in various domains over the years. These include text written by professionals-such as legal documents (Moens et al., 2007; Wyner et al., 2010; Mochales and Moens, 2011b) and newspaper articles (Reed et al., 2008)-as well as student essays (Stab and Gurevych, 2014; Wachsmuth et al., 2016) and online user comments and reviews (Wyner et al., 2012; Goudas et al., 2014; Park and Cardie, 2014). In addition, researchers have tackled dialogues (Budzynska et al., 2014), political debates (Lippi and Torroni, 2016), clinical trials (Mayer et al., 2018), peer reviews (Hua et al., 2019) and news blogs (Basile et al., 2016). While this is a diverse set of domains, they share a common trait: documents are long enough to contain full arguments, often multiple of them in a single document. Thus, argument mining involves identifying argumentative spans of text, determining argumentative units-e.g. premise and claim-within the arguments, and recognizing the argumentative structure connecting the units. However, tweets are typically too short to contain full arguments, preventing the use of standard argument mining approaches (Dusmanu et al., 2017). There has been some pioneering work on mining arguments from tweets, as summarized by Schaefer and Stede (2021). To get around the issue of tweets being too short to contain an entire argument, researchers typically seek to identify argumentative tweets-tweets that contain an argumentative unit, e.g. claim or premise (Bosc et al., 2016a,b; Dusmanu et al., 2017; W\u00fchrl and Klinger, 2021). For instance, Bosc et al. (2016a,b) distinguish argumentative tweets from non-argumentative ones. For tweets containing a claim, they further distinguish opinion from factual tweets. For tweets containing evidence, they seek to identify the source. Addawood and Bashir (2016); Addawood et al. 
(2017) also identify argumentative tweets, which are further broken down into six different types, such as expert opinion and blog. Schaefer and Stede (2020) present several task formulations, where the closest one to ours is identifying evidence tweets (for a claim expressed in what they call a context tweet or a reply tweet).", "cite_spans": [ { "start": 133, "end": 153, "text": "(Moens et al., 2007;", "ref_id": "BIBREF24" }, { "start": 154, "end": 173, "text": "Wyner et al., 2010;", "ref_id": "BIBREF46" }, { "start": 174, "end": 200, "text": "Mochales and Moens, 2011b)", "ref_id": "BIBREF23" }, { "start": 224, "end": 243, "text": "(Reed et al., 2008)", "ref_id": "BIBREF33" }, { "start": 271, "end": 296, "text": "(Stab and Gurevych, 2014;", "ref_id": "BIBREF38" }, { "start": 297, "end": 320, "text": "Wachsmuth et al., 2016)", "ref_id": "BIBREF42" }, { "start": 358, "end": 378, "text": "(Wyner et al., 2012;", "ref_id": "BIBREF47" }, { "start": 379, "end": 399, "text": "Goudas et al., 2014;", "ref_id": "BIBREF11" }, { "start": 400, "end": 422, "text": "Park and Cardie, 2014)", "ref_id": "BIBREF27" }, { "start": 473, "end": 497, "text": "(Budzynska et al., 2014)", "ref_id": "BIBREF5" }, { "start": 518, "end": 543, "text": "(Lippi and Torroni, 2016)", "ref_id": "BIBREF18" }, { "start": 562, "end": 582, "text": "(Mayer et al., 2018)", "ref_id": "BIBREF21" }, { "start": 598, "end": 616, "text": "(Hua et al., 2019)", "ref_id": "BIBREF14" }, { "start": 632, "end": 653, "text": "(Basile et al., 2016)", "ref_id": "BIBREF2" }, { "start": 1164, "end": 1186, "text": "(Dusmanu et al., 2017)", "ref_id": "BIBREF10" }, { "start": 1275, "end": 1300, "text": "Schaefer and Stede (2021)", "ref_id": "BIBREF36" }, { "start": 1509, "end": 1531, "text": "(Bosc et al., 2016a,b;", "ref_id": null }, { "start": 1532, "end": 1553, "text": "Dusmanu et al., 2017;", "ref_id": "BIBREF10" }, { "start": 1554, "end": 1578, "text": "W\u00fchrl and Klinger, 2021)", "ref_id": "BIBREF45" }, { "start": 
1595, "end": 1616, "text": "Bosc et al. (2016a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Argument Mining", "sec_num": "2.1" }, { "text": "Our work, however, specifically targets tweets containing both a claim (in the form of a hashtag) and a premise. This enables the full argument to be reconstructed for each argumentative tweet. In addition, our newly annotated dataset is significantly larger than the datasets used in previous tweet argument mining research, 10 to 80 times the size depending on the task (Dusmanu et al., 2017; Schaefer and Stede, 2020). This enhances the reliability of the experimental results and analyses.", "cite_spans": [ { "start": 382, "end": 404, "text": "(Dusmanu et al., 2017;", "ref_id": "BIBREF10" }, { "start": 405, "end": 430, "text": "Schaefer and Stede, 2020)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Argument Mining", "sec_num": "2.1" }, { "text": "Planned Parenthood is a non-profit organization that provides reproductive health services in the US and abroad. 2 Whether or not the US government should continue to fund Planned Parenthood has been the subject of ongoing debate, mainly due to the controversial practice of abortion (Halva-Neubauer and Zeigler, 2010; Devi, 2015; Silver and Kapadia, 2017); researchers have argued over the legality and subsequent funding for abortion (Primrose, 2012; Wharton et al., 2006). Supporters of Planned Parenthood have presented several arguments, including that it provides other medical services (Silver and Kapadia, 2017; Stevenson et al., 2016; House and Goldsmith, 1972). 
Those against Planned Parenthood have also expressed their position, mostly arguing against the practice of abortion (Halva-Neubauer and Zeigler, 2010; Ziegler, 2012; Devi, 2015).", "cite_spans": [ { "start": 284, "end": 318, "text": "(Halva-Neubauer and Zeigler, 2010;", "ref_id": "BIBREF12" }, { "start": 319, "end": 330, "text": "Devi, 2015;", "ref_id": "BIBREF9" }, { "start": 331, "end": 356, "text": "Silver and Kapadia, 2017)", "ref_id": "BIBREF37" }, { "start": 437, "end": 453, "text": "(Primrose, 2012;", "ref_id": "BIBREF30" }, { "start": 454, "end": 475, "text": "Wharton et al., 2006)", "ref_id": "BIBREF43" }, { "start": 595, "end": 621, "text": "(Silver and Kapadia, 2017;", "ref_id": "BIBREF37" }, { "start": 622, "end": 645, "text": "Stevenson et al., 2016;", "ref_id": "BIBREF40" }, { "start": 646, "end": 672, "text": "House and Goldsmith, 1972)", "ref_id": "BIBREF13" }, { "start": 792, "end": 826, "text": "(Halva-Neubauer and Zeigler, 2010;", "ref_id": "BIBREF12" }, { "start": 827, "end": 841, "text": "Ziegler, 2012;", "ref_id": "BIBREF48" }, { "start": 842, "end": 853, "text": "Devi, 2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Planned Parenthood", "sec_num": "2.2" }, { "text": "The general public has also been voicing their opinions through various social media platforms such as Twitter. While Twitter provides a convenient means to express opinions, gathering such opinions for analysis is not as straightforward. This is unfortunate, as many arguments with compelling reasons and evidence are present in tweets, yet they are not used to further the discussion surrounding Planned Parenthood in a productive manner. Our work is a step toward addressing this issue by enhancing the efficiency of communication. Tweets containing either of these hashtags, #StandWithPP and #DefundPP, were collected over a span of two months. 
Prior to preprocessing, there were a total of 20,314 and 12,470 tweets containing #StandWithPP and #DefundPP, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Planned Parenthood", "sec_num": "2.2" }, { "text": "As part of the preprocessing, we first removed duplicate and otherwise uninformative tweets that can be easily identified 3 : tweets by the seven most frequently tweeting users (these are mostly auto-generated spam with repetitive content); tweets with a URL and two or more special characters (this is a noticeable pattern for tweets in our dataset simply sharing URLs to news sites with random special characters to catch people's attention); tweets with fewer than 4 tokens; and tweets in which @-mentions, URLs, or hashtags make up more than 35% of the tokens. The filtering process reduced the number of tweets to 16,870 and 7,230 for #StandWithPP and #DefundPP, respectively. Then, all @-mentions were masked to protect the users' privacy. Any URLs were also masked, as our goal is to recognize premises in the body of tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "The dataset was then annotated using the Amazon Mechanical Turk 4 service. The annotators were asked to classify each tweet as one of the three possible classes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "\u2022 SUPPORT WITH REASON: The user supports the claim represented by the claim-hashtag and presents a reason, regardless of the validity and strength.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "\u2022 SUPPORT WITHOUT REASON: The user supports the claim represented by the claim-hashtag, but does not provide a reason.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "\u2022 NO EXPLICIT SUPPORT: All other tweets. 
Typically, the user has a neutral or unclear stance toward the claim represented by the claim-hashtag, as in news tweets. In some cases, the user uses a claim-hashtag to present a counter-argument to people supporting the claim, rather than to show support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "We ran a pilot study in which annotators were asked to annotate tweets for which we had the gold standard labels. Out of 100 annotators who participated, we identified 32 reliable annotators to annotate the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "Then, each tweet was annotated by two annotators, where disagreements were resolved by an adjudicator. We observed reasonable agreement (Krippendorff's \u03b1 = 0.79). A common source of disagreement was incomplete information, e.g. \"8 Unbelievably Heartbreaking Quotes From Women Who Aborted Their Own Babies | [URL]: [URL] #DefundPP.\" Depending on the quotes presented in the URL, this tweet can be for or against Planned Parenthood: What is heartbreaking could be abortion itself or the process of abortion due to the lack of access to adequate health services. (Given the presence of the hashtag #DefundPP, it is likely that the quote, and in turn this tweet, is against abortion and Planned Parenthood. However, the annotators were asked not to assume the presence of a hashtag as a sign of support, as it is not always true.) Table 2 summarizes the resulting dataset. Note that this is after removing obvious spam tweets during preprocessing as described above. Thus, the percentage of NO EXPLICIT SUPPORT is higher in reality. Also, there is a noticeable difference between #StandWithPP and #DefundPP tweets in terms of the class distribution; a significantly smaller portion of the latter are SUPPORT WITHOUT REASON tweets. 
We suspect that this is because changing the status quo requires more convincing arguments. Thus, people arguing to defund Planned Parenthood are more likely to support their claim with a reason or evidence.", "cite_spans": [], "ref_spans": [ { "start": 829, "end": 836, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "Argument mining consists of several subtasks, such as identifying argumentative spans of text, determining argumentative units-e.g. premise and claim-within the arguments, and recognizing the argumentative structure connecting the units. In this work, however, the claim is easily identifiable, as we assume that it takes the form of a hashtag, i.e., claim-hashtag, that is known in advance. Thus, the core of our approach to mining arguments on Twitter is deciding whether or not a given tweet is a premise-tweet for a given claim-hashtag. To tackle the task, we train fine-tuned BERT, CNN, and XGBoost classifiers as detailed in this section. Note that we are also interested in distinguishing non-premise-tweets that support the claim (SUPPORT WITHOUT REASON) from those that do not (NO EXPLICIT SUPPORT); this is because the sheer number of tweets supporting a claim can be used to generate a statistical summary of people's support for the claim. Thus, we formulate argument mining on Twitter as a classification task with three classes: SUPPORT WITH REASON, SUPPORT WITHOUT REASON, and NO EXPLICIT SUPPORT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Premise-Tweet Identification", "sec_num": "4" }, { "text": "Given the successful use of fine-tuned BERT on various text classification tasks (Croce et al., 2020; Tian et al., 2020), we fine-tune a pre-trained BERT on the premise-tweet classification task using our training set. 
For the experiments, we fine-tune the 'bert-base-uncased' pre-trained model (Wolf et al., 2020), which consists of 12 BERT attention layers, 768 hidden nodes, and 12 attention heads, with a total of 110M parameters. Using the BERT Tokenizer (Wolf et al., 2020), each tweet is represented by token, segment, and position embeddings. Lastly, in order to classify tweets, the model is augmented with a fully-connected classification layer with ReLU activation on top of the pooled output from BERT. The AdamW optimizer, which decouples weight-decay regularization from the gradient update, is used (Loshchilov and Hutter, 2019).", "cite_spans": [ { "start": 81, "end": 101, "text": "(Croce et al., 2020;", "ref_id": "BIBREF8" }, { "start": 102, "end": 120, "text": "Tian et al., 2020)", "ref_id": "BIBREF41" }, { "start": 292, "end": 311, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF44" }, { "start": 458, "end": 477, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF44" }, { "start": 759, "end": 788, "text": "(Loshchilov and Hutter, 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuned BERT", "sec_num": "4.1" }, { "text": "In addition to BERT, we also test the efficacy of DistilBERT, which is a much simpler and faster model that can match the performance of BERT in some cases (Sanh et al., 2019).", "cite_spans": [ { "start": 156, "end": 175, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuned BERT", "sec_num": "4.1" }, { "text": "While BERT's attention mechanism has been shown to be effective at capturing both short and long distance relations between words in documents, a simple CNN may suffice given the brevity of tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Neural Network (CNN)", "sec_num": "4.2" }, { "text": "Thus, we also experiment with CNN. 
Following the framework presented by Kim (2014), each tweet is represented as an n \u00d7 k matrix, where n is the length of the tweet and k is the dimensionality of the word vectors. For word representation, we employ two versions of the GloVe word embedding (Pennington et al., 2014): a 200-d version trained on tweets, since we are working with tweets; and a 300-d version trained on Common Crawl, since a higher-dimensional embedding may be more effective. For both, we limit the size of the vocabulary to a million tokens.", "cite_spans": [ { "start": 72, "end": 82, "text": "Kim (2014)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Convolutional Neural Network (CNN)", "sec_num": "4.2" }, { "text": "XGBoost is an extension to Gradient Boosting that has been shown to be effective in several classification tasks (Stein et al., 2019; Qi, 2020). Schaefer and Stede (2020) show that using XGBoost to classify evidence tweets yields promising results. Given the similarity of one of their setups to ours, we use as baselines the three variations they employed: XGBoost with UNIGRAMS, XGBoost with UNIGRAMS + BIGRAMS, and XGBoost with BERT word embeddings. The booster we use is a gradient boosting tree, with a standard max depth of 6 for a tree. The algorithm minimizes the multi-class log loss function, and applies a variation of softmax to get the predicted output probabilities.", "cite_spans": [ { "start": 108, "end": 128, "text": "(Stein et al., 2019;", "ref_id": "BIBREF39" }, { "start": 129, "end": 138, "text": "Qi, 2020)", "ref_id": "BIBREF31" }, { "start": 141, "end": 166, "text": "Schaefer and Stede (2020)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "eXtreme Gradient Boosting (XGBoost)", "sec_num": "4.3" }, { "text": "For each claim-hashtag in our dataset-#StandWithPP and #DefundPP-the classifiers were trained and tested on the respective training and held-out test sets (see Table 2). 
For optimizing hyperparameters, 5-fold cross-validation was performed on the training set. The dropout rate was p = 0.5 for CNN and p = 0.1 for BERT. The learning rate was lr = 0.001 for CNN and lr = 2e \u2212 5 for BERT. Batch size was b = 50 for CNN and b = 32 for BERT. The number of epochs was 15 for CNN and 4 for BERT. For the XGBoost baselines, we used the same setup from Schaefer and Stede (2020), but the models were trained and tested on our training and test sets.", "cite_spans": [], "ref_spans": [ { "start": 160, "end": 167, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "The experimental results are summarized in Table 3. Fine-tuned BERT outperformed the rest across the board. This is not surprising given the strong state-of-the-art performance of transformer-based models across various NLP tasks. DistilBERT, a much smaller and faster version of BERT, is noticeably worse than BERT, but is comparable to the CNNs. CNN with GloVe-Twitter performs slightly better than CNN with GloVe-Common Crawl; we suspect that the word embedding trained on tweets is more effective, since our dataset is also a collection of tweets. For the XGBoost baselines, using n-grams proved to be more effective than using BERT word embeddings. This is consistent with the results from Schaefer and Stede (2020), though the datasets are different, and thus a direct comparison cannot be made. We suspect that the straightforward mapping between dimensions and words in n-grams is better suited for XGBoost than multiple dimensions collectively representing a word in a word embedding; this is because XGBoost is a decision tree based approach that learns to weigh each feature (dimension) differently. Figures 1 and 2 show the median SHAP (SHapley Additive exPlanations) (Lundberg and Lee, 2017) values of the top features for fine-tuned BERT, the best-performing model. 
The SHAP value for a feature with respect to a class indicates the level of influence the given feature had in classifying a tweet as the given class. The influence can be either positive or negative, as indicated by the sign of the SHAP value; the larger the magnitude, the greater the influence on the classification decision. The median for each word is calculated across all occurrences of the word in the test set. Note that words that occur fewer than 10 times in the test set were excluded from the plot. Similar patterns are exhibited for both claim-hashtags. For SUPPORT WITH REASON, the words with large absolute SHAP values tend to be keywords for prominent arguments for the given claim.", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 48, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1109, "end": 1124, "text": "Figures 1 and 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results & Analysis", "sec_num": "5.2" }, { "text": "In the case of #StandWithPP, the words \"women\" and \"healthcare\" rank high; they typically appear in tweets that emphasize women's rights or the need for healthcare in general as reasons to support Planned Parenthood (see Examples 1 and 2 in Table 4). In the case of #DefundPP, words that emphasize babies and frame abortion as murder rank high (Examples 3 and 4).", "cite_spans": [], "ref_spans": [ { "start": 241, "end": 248, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results & Analysis", "sec_num": "5.2" }, { "text": "For NO EXPLICIT SUPPORT, most of the words with large absolute SHAP values have negative values, meaning the existence of these words was taken as a sign that the given tweet is not NO EXPLICIT SUPPORT. In other words, lacking strong characteristics of the other classes is the characteristic of NO EXPLICIT SUPPORT (Example 7). This is partially due to our having removed spam tweets with obvious patterns during preprocessing. 
Otherwise, those patterns may have had positive SHAP values of large magnitude.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Analysis", "sec_num": "5.2" }, { "text": "For SUPPORT WITHOUT REASON, however, the two claim-hashtags exhibit some differences. For #StandWithPP, words that appear in clear statements of support, e.g., \"support\" and \"stands [with Planned Parenthood],\" have a positive influence on classifying a tweet as SUPPORT WITHOUT REASON. This is because such tweets tend not to include a rationale (Example 5). However, similar words for #DefundPP, e.g., \"defund\" and \"stop [funding Planned Parenthood],\" do not have high SHAP values with respect to SUPPORT WITHOUT REASON, as they often appear with additional explanations (Example 4). Other than \"please\" (Example 6), there are not many indicators of SUPPORT WITHOUT REASON for #DefundPP. There are not many SUPPORT WITHOUT REASON tweets to begin with, as shown in Table 2. Again, we suspect that non-NO EXPLICIT SUPPORT tweets for #DefundPP tend to contain a reason, as they have to be convincing enough to change the status quo.", "cite_spans": [ { "start": 181, "end": 206, "text": "[with Planned Parenthood]", "ref_id": null } ], "ref_spans": [ { "start": 762, "end": 769, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results & Analysis", "sec_num": "5.2" }, { "text": "Note that the informative features are not always helpful. Non-SUPPORT WITH REASON tweets that contain top feature words of SUPPORT WITH REASON can be incorrectly classified. For example, the tweet can be a news tweet reporting the state of affairs. Such a tweet does not always reveal the stance of the user posting it (Example 8). The tweet can also be part of a conversation where the reason for supporting the claim cannot be determined without knowing the tweet being replied to (Example 9). Figure 3: Impact of training set size on performance. 
Pre-trained BERT was fine-tuned on randomly subsampled training sets. Training set sizes in increments of 1,000 were tested, and results were averaged over 3 runs.", "cite_spans": [], "ref_spans": [ { "start": 503, "end": 511, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Results & Analysis", "sec_num": "5.2" }, { "text": "There are two main limitations of this work that need to be considered in future research. First, labeled training data is required to train a premise-tweet classifier for each claim of interest. The most informative features for our classifiers are specific to the claim-hashtag they are trained for; even though #StandWithPP and #DefundPP are on the same topic, the informative features are drastically different. This suggests that a classifier trained for one claim-hashtag is likely not effective for identifying premise-tweets for other claim-hashtags. In fact, this was confirmed through cross-domain testing, i.e., we fine-tuned BERT on the #StandWithPP training set and tested on the #DefundPP test set, and vice versa. There was a significant drop in the F1 score, from 71% to 56%, in both scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Implications", "sec_num": "6" }, { "text": "To determine how much training data is necessary, we fine-tuned BERT on randomly selected subsets of the training sets. We tested training set sizes in increments of 1k, as shown in Figure 3. The same pattern can be observed for both claim-hashtags: There is a drastic improvement in performance after fine-tuning with even a small training set of size 1k, and the performance plateaus after increasing the size to about 3k to 4k. 
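The subsampling procedure for this learning-curve experiment can be sketched as follows. Note that `eval_fn` is a hypothetical stand-in for fine-tuning BERT on a subset and returning the test-set F1; the loop structure, not the model, is what the sketch illustrates:

```python
import random
from statistics import mean

def learning_curve(train_set, eval_fn, step=1000, runs=3, seed=0):
    """For each training-set size (in increments of `step`), draw a random
    subsample, train and evaluate via `eval_fn`, and average over `runs` runs.

    Returns a dict mapping subsample size -> mean score.
    """
    rng = random.Random(seed)
    sizes = range(step, len(train_set) + 1, step)
    return {
        n: mean(eval_fn(rng.sample(train_set, n)) for _ in range(runs))
        for n in sizes
    }
```

With a real `eval_fn`, the plateau described above would show up as near-constant scores beyond the 3k-4k sizes.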
Based on these results, we suggest that a labeled dataset of at least 3k tweets be prepared to train a premise-tweet classifier for a claim-hashtag of interest.", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 189, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Limitations and Implications", "sec_num": "6" }, { "text": "Second, our results are based on experiments with tweets containing two specific claim-hashtags. Future work should consider a more diverse set of claim-hashtags. This will not only test the generalizability of this approach, but may also reveal informative features that are claim-independent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Implications", "sec_num": "6" }, { "text": "Manually compiling a list of diverse claim-hashtags can be laborious, however. To alleviate this issue, we have identified a class of claim-hashtags that can be automatically recognized. These hashtags represent so-called policy propositions, meaning they suggest policies, or courses of action to be taken (Park et al., 2015). They typically take the form of an imperative, starting with a verb and ending with a noun, e.g., #StandWithPP, #DefundPP, #FightFor15, #LegalizeMarijuana, and #BanGuns. Hashtags do not contain spaces, but CamelCase capitalization can be used for tokenization: a capitalized letter marks the beginning of a new word, unless several capitalized letters appear in succession to denote a proper noun. The repetitive use of hashtags in tweets is helpful in this regard, as it is very likely that at least one variation of a given hashtag is in CamelCase. 
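The CamelCase splitting rule just described can be implemented as a small regex-based tokenizer. This is an illustrative sketch, not part of the authors' pipeline:

```python
import re

def split_camelcase_hashtag(hashtag):
    """Split a CamelCase hashtag into words.

    A capitalized letter starts a new word, except that a run of successive
    capitals (e.g. the acronym "PP") is kept together as one token.
    """
    body = hashtag.lstrip("#")
    # Alternation, tried left to right:
    #   [A-Z]+(?![a-z])  - an acronym run not followed by a lowercase letter
    #   [A-Z][a-z]+      - a capitalized word
    #   [a-z0-9]+        - a lowercase or digit run (e.g. "15")
    return re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]+|[a-z0-9]+", body)
```

For example, `split_camelcase_hashtag("#StandWithPP")` yields `["Stand", "With", "PP"]`, and `"#FightFor15"` yields `["Fight", "For", "15"]`.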
Thus, to identify a diverse set of claim-hashtags, we suggest identifying trending hashtags that represent policy propositions.", "cite_spans": [ { "start": 305, "end": 324, "text": "(Park et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations and Implications", "sec_num": "6" }, { "text": "Twitter is a popular platform for sharing opinions, which may be accompanied by the underlying rationale. However, the effort to automatically extract arguments from Twitter has been limited, mainly because tweets typically do not contain both a claim and a premise. This brevity makes it difficult to apply argument mining techniques designed for other domains, where claims and premises can be extracted together. In this paper, we proposed a novel problem formulation to mine arguments from Twitter: We formulated argument mining on Twitter as a text classification task to identify tweets serving as premises for hashtags that represent claims. We demonstrated the efficacy of this formulation by mining arguments for and against funding Planned Parenthood expressed on Twitter. We achieved the best performance of 71% F1 with fine-tuned BERT. We also showed that domain-specific words serve as the most important features, which in turn reveal prominent arguments in support of the given claim. 
In future work, we would like to continue the effort addressing the issues discussed in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Tweet IDs and labels are available at joonsuk.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.plannedparenthood.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These were all NO EXPLICIT SUPPORT tweets, technically, but we removed them from the dataset, as they can be easily identified by pattern matching, without training a classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.mturk.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the University of Richmond and the Thomas F. and Kate Miller Jeffress Memorial Trust, Bank of America, Trustee for their generous support for this project. We also thank Jamison Poland.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "what is your evidence?\" a study of controversial topics on social media", "authors": [ { "first": "Aseel", "middle": [], "last": "Addawood", "suffix": "" }, { "first": "Masooda", "middle": [], "last": "Bashir", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Third Workshop on Argument Mining (ArgMining2016)", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aseel Addawood and Masooda Bashir. 2016. \"what is your evidence?\" a study of controversial topics on social media. 
In Proceedings of the Third Workshop on Argument Mining (ArgMining2016), pages 1-11.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Stance classification of twitter debates: The encryption debate as a use case", "authors": [ { "first": "Aseel", "middle": [], "last": "Addawood", "suffix": "" }, { "first": "Jodi", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Masooda", "middle": [], "last": "Bashir", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 8th International Conference on Social Media & Society", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aseel Addawood, Jodi Schneider, and Masooda Bashir. 2017. Stance classification of twitter debates: The encryption debate as a use case. In Proceedings of the 8th International Conference on Social Media & Society, pages 1-10.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Argument Mining on Italian News Blogs", "authors": [ { "first": "Pierpaolo", "middle": [], "last": "Basile", "suffix": "" }, { "first": "Valerio", "middle": [], "last": "Basile", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2016, "venue": "Third Italian Conference on Computational Linguistics (CLiC-it 2016) & Fifth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierpaolo Basile, Valerio Basile, Elena Cabrio, and Serena Villata. 2016. Argument Mining on Italian News Blogs. In Third Italian Conference on Computational Linguistics (CLiC-it 2016) & Fifth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2016), Naples, Italy.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "DART: a dataset of arguments and their relations on Twitter", "authors": [ { "first": "Tom", "middle": [], "last": "Bosc", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "1258--1263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Bosc, Elena Cabrio, and Serena Villata. 2016a. DART: a dataset of arguments and their relations on Twitter. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1258-1263, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Tweeties squabbling: Positive and negative results in applying argument mining on social media", "authors": [ { "first": "Tom", "middle": [], "last": "Bosc", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2016, "venue": "COMMA", "volume": "", "issue": "", "pages": "21--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Bosc, Elena Cabrio, and Serena Villata. 2016b. Tweeties squabbling: Positive and negative results in applying argument mining on social media. COMMA, 2016:21-32.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Towards argument mining from dialogue", "authors": [ { "first": "Katarzyna", "middle": [], "last": "Budzynska", "suffix": "" }, { "first": "Mathilde", "middle": [], "last": "Janier", "suffix": "" }, { "first": "Juyeon", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2014, "venue": "COMMA", "volume": "", "issue": "", "pages": "185--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katarzyna Budzynska, Mathilde Janier, Juyeon Kang, Chris Reed, Patrick Saint-Dizier, Manfred Stede, and Olena Yaskorska. 2014. Towards argument mining from dialogue. In COMMA, pages 185-196.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Argument mining", "authors": [ { "first": "Katarzyna", "middle": [], "last": "Budzynska", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2016, "venue": "IEEE Intell. Informatics Bull", "volume": "17", "issue": "1", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katarzyna Budzynska and Serena Villata. 2016. Argument mining. IEEE Intell. Informatics Bull., 17(1):1-6.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "AMPERSAND: Argument mining for PERSuAsive oNline discussions", "authors": [ { "first": "Tuhin", "middle": [], "last": "Chakrabarty", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Mckeown", "suffix": "" }, { "first": "Alyssa", "middle": [], "last": "Hwang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2933--2943", "other_ids": { "DOI": [ "10.18653/v1/D19-1291" ] }, "num": null, "urls": [], "raw_text": "Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy McKeown, and Alyssa Hwang. 2019. AMPERSAND: Argument mining for PERSuAsive oNline discussions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2933-2943, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Gan-bert: Generative adversarial learning for robust text classification with a bunch of labeled examples", "authors": [ { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Castellucci", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2114--2119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Croce, Giuseppe Castellucci, and Roberto Basili. 2020. Gan-bert: Generative adversarial learning for robust text classification with a bunch of labeled examples. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2114-2119.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Anti-abortion groups target funding of planned parenthood", "authors": [ { "first": "Sharmila", "middle": [], "last": "Devi", "suffix": "" } ], "year": 2015, "venue": "The Lancet", "volume": "386", "issue": "9997", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharmila Devi. 2015. Anti-abortion groups target funding of planned parenthood. The Lancet, 386(9997):941.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Argument mining on twitter: Arguments, facts and sources", "authors": [ { "first": "Mihai", "middle": [], "last": "Dusmanu", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2317--2322", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Dusmanu, Elena Cabrio, and Serena Villata. 2017. Argument mining on twitter: Arguments, facts and sources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2317-2322.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Argument extraction from news, blogs, and social media", "authors": [ { "first": "Theodosis", "middle": [], "last": "Goudas", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Louizos", "suffix": "" } ], "year": 2014, "venue": "In Artificial Intelligence: Methods and Applications", "volume": "", "issue": "", "pages": "287--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theodosis Goudas, Christos Louizos, Georgios Petasis, and Vangelis Karkaletsis. 2014. Argument extraction from news, blogs, and social media. In Artificial Intelligence: Methods and Applications, pages 287-299. Springer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Promoting fetal personhood: The rhetorical and legislative strategies of the pro-life movement after planned parenthood v", "authors": [ { "first": "A", "middle": [], "last": "Glen", "suffix": "" }, { "first": "Sara", "middle": [ "L" ], "last": "Halva-Neubauer", "suffix": "" }, { "first": "", "middle": [], "last": "Zeigler", "suffix": "" } ], "year": 2010, "venue": "casey. Feminist Formations", "volume": "", "issue": "", "pages": "101--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Glen A Halva-Neubauer and Sara L Zeigler. 2010. Promoting fetal personhood: The rhetorical and legislative strategies of the pro-life movement after planned parenthood v. casey. Feminist Formations, pages 101-123.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Planned parenthood services for the young teenager", "authors": [ { "first": "Elizabeth", "middle": [ "A" ], "last": "House", "suffix": "" }, { "first": "Sadja", "middle": [], "last": "Goldsmith", "suffix": "" } ], "year": 1972, "venue": "Family Planning Perspectives", "volume": "4", "issue": "2", "pages": "27--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elizabeth A. House and Sadja Goldsmith. 1972. Planned parenthood services for the young teenager. Family Planning Perspectives, 4(2):27-31.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Argument mining for understanding peer reviews", "authors": [ { "first": "Xinyu", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Mitko", "middle": [], "last": "Nikolov", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Badugu", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2131--2137", "other_ids": { "DOI": [ "10.18653/v1/N19-1219" ] }, "num": null, "urls": [], "raw_text": "Xinyu Hua, Mitko Nikolov, Nikhil Badugu, and Lu Wang. 2019. Argument mining for understanding peer reviews. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2131-2137, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": { "DOI": [ "10.3115/v1/D14-1181" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Argument mining: A survey", "authors": [ { "first": "John", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Reed", "suffix": "" } ], "year": 2020, "venue": "Computational Linguistics", "volume": "45", "issue": "4", "pages": "765--818", "other_ids": { "DOI": [ "10.1162/coli_a_00364" ] }, "num": null, "urls": [], "raw_text": "John Lawrence and Chris Reed. 2020. Argument mining: A survey. Computational Linguistics, 45(4):765-818.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Argument mining: A machine learning perspective", "authors": [ { "first": "Marco", "middle": [], "last": "Lippi", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Torroni", "suffix": "" } ], "year": 2015, "venue": "International Workshop on Theory and Applications of Formal Argumentation", "volume": "", "issue": "", "pages": "163--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Lippi and Paolo Torroni. 2015. Argument mining: A machine learning perspective. In International Workshop on Theory and Applications of Formal Argumentation, pages 163-176. Springer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Argument mining from speech: Detecting claims in political debates", "authors": [ { "first": "Marco", "middle": [], "last": "Lippi", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Torroni", "suffix": "" } ], "year": 2016, "venue": "Thirtieth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Lippi and Paolo Torroni. 2016. Argument mining from speech: Detecting claims in political debates. In Thirtieth AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Decoupled weight decay regularization", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A unified approach to interpreting model predictions", "authors": [ { "first": "M", "middle": [], "last": "Scott", "suffix": "" }, { "first": "Su-In", "middle": [], "last": "Lundberg", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17", "volume": "", "issue": "", "pages": "4768--4777", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 4768-4777, Red Hook, NY, USA. Curran Associates Inc.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Argument mining on clinical trials", "authors": [ { "first": "Tobias", "middle": [], "last": "Mayer", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Lippi", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Torroni", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2018, "venue": "COMMA", "volume": "", "issue": "", "pages": "137--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tobias Mayer, Elena Cabrio, Marco Lippi, Paolo Torroni, and Serena Villata. 2018. Argument mining on clinical trials. In COMMA, pages 137-148.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Argumentation mining", "authors": [ { "first": "Raquel", "middle": [], "last": "Mochales", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2011, "venue": "Artificial Intelligence and Law", "volume": "19", "issue": "1", "pages": "1--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raquel Mochales and Marie-Francine Moens. 2011a. Argumentation mining. Artificial Intelligence and Law, 19(1):1-22.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Argumentation mining", "authors": [ { "first": "Raquel", "middle": [], "last": "Mochales", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2011, "venue": "Artif. Intell. Law", "volume": "19", "issue": "1", "pages": "1--22", "other_ids": { "DOI": [ "10.1007/s10506-010-9104-x" ] }, "num": null, "urls": [], "raw_text": "Raquel Mochales and Marie-Francine Moens. 2011b. Argumentation mining. Artif. Intell. Law, 19(1):1-22.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Automatic detection of arguments in legal texts", "authors": [ { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Boiy", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 11th International Conference on Artificial Intelligence and Law, ICAIL '07", "volume": "", "issue": "", "pages": "225--230", "other_ids": { "DOI": [ "10.1145/1276318.1276362" ] }, "num": null, "urls": [], "raw_text": "Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection of arguments in legal texts. In Proceedings of the 11th International Conference on Artificial Intelligence and Law, ICAIL '07, pages 225-230, New York, NY, USA. ACM.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Annotating online civic discussion threads for argument mining", "authors": [ { "first": "G", "middle": [], "last": "Morio", "suffix": "" }, { "first": "K", "middle": [], "last": "Fujita", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI)", "volume": "", "issue": "", "pages": "546--553", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Morio and K. Fujita. 2018. Annotating online civic discussion threads for argument mining. In 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI), pages 546-553.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Toward machine-assisted participation in erulemaking: An argumentation model of evaluability", "authors": [ { "first": "Joonsuk", "middle": [], "last": "Park", "suffix": "" }, { "first": "Cheryl", "middle": [], "last": "Blake", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 15th International Conference on Artificial Intelligence and Law, ICAIL '15", "volume": "", "issue": "", "pages": "206--210", "other_ids": { "DOI": [ "10.1145/2746090.2746118" ] }, "num": null, "urls": [], "raw_text": "Joonsuk Park, Cheryl Blake, and Claire Cardie. 2015. Toward machine-assisted participation in erulemaking: An argumentation model of evaluability. In Proceedings of the 15th International Conference on Artificial Intelligence and Law, ICAIL '15, pages 206-210, New York, NY, USA. ACM.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Identifying appropriate support for propositions in online user comments", "authors": [ { "first": "Joonsuk", "middle": [], "last": "Park", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the First Workshop on Argumentation Mining", "volume": "", "issue": "", "pages": "29--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joonsuk Park and Claire Cardie. 2014. Identifying appropriate support for propositions in online user comments. In Proceedings of the First Workshop on Argumentation Mining, pages 29-38, Baltimore, Maryland. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "From argument diagrams to argumentation mining in texts: A survey", "authors": [ { "first": "Andreas", "middle": [], "last": "Peldszus", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2013, "venue": "Int. J. Cogn. Inform. Nat. Intell", "volume": "7", "issue": "1", "pages": "1--31", "other_ids": { "DOI": [ "10.4018/jcini.2013010101" ] }, "num": null, "urls": [], "raw_text": "Andreas Peldszus and Manfred Stede. 2013. From argument diagrams to argumentation mining in texts: A survey. Int. J. Cogn. Inform. Nat. Intell., 7(1):1-31.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The attack on planned parenthood: A historical analysis", "authors": [ { "first": "Sarah", "middle": [], "last": "Primrose", "suffix": "" } ], "year": 2012, "venue": "UCLA Women's LJ", "volume": "19", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Primrose. 2012. 
The attack on planned parent- hood: A historical analysis. UCLA Women's LJ, 19:165.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The text classification of theft crime based on tf-idf and xgboost model", "authors": [ { "first": "Zhang", "middle": [], "last": "Qi", "suffix": "" } ], "year": 2020, "venue": "2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA)", "volume": "", "issue": "", "pages": "1241--1246", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang Qi. 2020. The text classification of theft crime based on tf-idf and xgboost model. In 2020 IEEE In- ternational Conference on Artificial Intelligence and Computer Applications (ICAICA), pages 1241-1246. IEEE.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Argumentation in artificial intelligence", "authors": [ { "first": "Guillermo", "middle": [ "R" ], "last": "Iyad Rahwan", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Simari", "suffix": "" }, { "first": "", "middle": [], "last": "Van Benthem", "suffix": "" } ], "year": 2009, "venue": "", "volume": "47", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iyad Rahwan, Guillermo R Simari, and Johan van Ben- them. 2009. Argumentation in artificial intelligence, volume 47. Springer.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Language resources for studying argument", "authors": [ { "first": "Chris", "middle": [], "last": "Reed", "suffix": "" }, { "first": "Raquel", "middle": [ "Mochales" ], "last": "Palau", "suffix": "" }, { "first": "Glenn", "middle": [], "last": "Rowe", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2008, "venue": "LREC. European Language Resources Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Reed, Raquel Mochales Palau, Glenn Rowe, and Marie-Francine Moens. 2008. 
Language resources for studying argument. In LREC. European Language Resources Association.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Annotation and detection of arguments in tweets", "authors": [ { "first": "Robin", "middle": [], "last": "Schaefer", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 7th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "53--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robin Schaefer and Manfred Stede. 2020. Annotation and detection of arguments in tweets. In Proceedings of the 7th Workshop on Argument Mining, pages 53-58.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Argument mining on twitter: A survey", "authors": [ { "first": "Robin", "middle": [], "last": "Schaefer", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2021, "venue": "", "volume": "63", "issue": "", "pages": "45--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robin Schaefer and Manfred Stede. 2021. Argument mining on twitter: A survey.
it-Information Technology, 63(1):45-58.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Planned parenthood is health care, and health care must defend it: a call to action", "authors": [ { "first": "Diana", "middle": [], "last": "Silver", "suffix": "" }, { "first": "Farzana", "middle": [], "last": "Kapadia", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana Silver and Farzana Kapadia. 2017. Planned parenthood is health care, and health care must defend it: a call to action.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Annotating argument components and relations in persuasive essays", "authors": [ { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014)", "volume": "", "issue": "", "pages": "1501--1510", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Stab and Iryna Gurevych. 2014. Annotating argument components and relations in persuasive essays. In Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014), pages 1501-1510, Dublin, Ireland.
Dublin City University and Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "An analysis of hierarchical text classification using word embeddings", "authors": [ { "first": "Roger", "middle": [ "Alan" ], "last": "Stein", "suffix": "" }, { "first": "Patricia", "middle": [ "A" ], "last": "Jaques", "suffix": "" }, { "first": "Joao", "middle": [ "Francisco" ], "last": "Valiati", "suffix": "" } ], "year": 2019, "venue": "Information Sciences", "volume": "471", "issue": "", "pages": "216--232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger Alan Stein, Patricia A Jaques, and Joao Francisco Valiati. 2019. An analysis of hierarchical text classification using word embeddings. Information Sciences, 471:216-232.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Effect of removal of planned parenthood from the texas women's health program", "authors": [ { "first": "Amanda", "middle": [ "J" ], "last": "Stevenson", "suffix": "" }, { "first": "Imelda", "middle": [ "M" ], "last": "Flores-Vazquez", "suffix": "" }, { "first": "Richard", "middle": [ "L" ], "last": "Allgeyer", "suffix": "" }, { "first": "Pete", "middle": [], "last": "Schenkkan", "suffix": "" }, { "first": "Joseph", "middle": [ "E" ], "last": "Potter", "suffix": "" } ], "year": 2016, "venue": "New England Journal of Medicine", "volume": "374", "issue": "9", "pages": "853--860", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amanda J Stevenson, Imelda M Flores-Vazquez, Richard L Allgeyer, Pete Schenkkan, and Joseph E Potter. 2016. Effect of removal of planned parenthood from the texas women's health program.
New England Journal of Medicine, 374(9):853-860.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Early detection of rumours on twitter via stance transfer learning", "authors": [ { "first": "Lin", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Xiuzhen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "European Conference on Information Retrieval", "volume": "", "issue": "", "pages": "575--588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin Tian, Xiuzhen Zhang, Yan Wang, and Huan Liu. 2020. Early detection of rumours on twitter via stance transfer learning. In European Conference on Information Retrieval, pages 575-588. Springer.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Using argument mining to assess the argumentation quality of essays", "authors": [ { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Khalid", "middle": [], "last": "Al Khatib", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1680--1691", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henning Wachsmuth, Khalid Al Khatib, and Benno Stein. 2016. Using argument mining to assess the argumentation quality of essays. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1680-1691.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Preserving the core of roe: Reflections on planned parenthood v.
casey", "authors": [ { "first": "Linda", "middle": [ "J" ], "last": "Wharton", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Frietsche", "suffix": "" }, { "first": "Kathryn", "middle": [], "last": "Kolbert", "suffix": "" } ], "year": 2006, "venue": "Yale Journal of Law and Feminism", "volume": "18", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda J. Wharton, Susan Frietsche, and Kathryn Kolbert. 2006. Preserving the core of roe: Reflections on planned parenthood v. casey. Yale Journal of Law and Feminism, 18:2.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Claim detection in biomedical Twitter posts", "authors": [ { "first": "Amelie", "middle": [], "last": "W\u00fchrl", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 20th Workshop on Biomedical Language Processing", "volume": "", "issue": "", "pages": "131--142", "other_ids": { "DOI": [ "10.18653/v1/2021.bionlp-1.15" ] }, "num": null, "urls": [], "raw_text": "Amelie W\u00fchrl and Roman Klinger. 2021. Claim detection in biomedical Twitter posts. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 131-142, Online. Association for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Semantic processing of legal texts.
chapter Approaches to Text Mining Arguments from Legal Cases", "authors": [ { "first": "Adam", "middle": [], "last": "Wyner", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Mochales-Palau", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" }, { "first": "David", "middle": [], "last": "Milward", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "60--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Wyner, Raquel Mochales-Palau, Marie-Francine Moens, and David Milward. 2010. Semantic processing of legal texts. chapter Approaches to Text Mining Arguments from Legal Cases, pages 60-79. Springer-Verlag, Berlin, Heidelberg.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Semi-automated argumentative analysis of online product reviews", "authors": [ { "first": "Adam", "middle": [], "last": "Wyner", "suffix": "" }, { "first": "Jodi", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Katie", "middle": [], "last": "Atkinson", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Bench-Capon", "suffix": "" } ], "year": 2012, "venue": "COMMA", "volume": "245", "issue": "", "pages": "43--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Wyner, Jodi Schneider, Katie Atkinson, and Trevor JM Bench-Capon. 2012. Semi-automated argumentative analysis of online product reviews. COMMA, 245:43-50.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Sexing harris: The law and politics of the movement to defund planned parenthood", "authors": [ { "first": "Mary", "middle": [], "last": "Ziegler", "suffix": "" } ], "year": 2012, "venue": "Buff. L. Rev", "volume": "60", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mary Ziegler. 2012. Sexing harris: The law and politics of the movement to defund planned parenthood. Buff. L.
Rev., 60:701.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "#StandWithPP and #DefundPP represent the two opposing sides on the issue of the US federal government funding Planned Parenthood, or of Planned Parenthood itself. The claims represented by the hashtags can be stated as follows: \u2022 #StandWithPP: Planned Parenthood should continue to receive federal funding. \u2022 #DefundPP: Planned Parenthood should not receive federal funding.", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "Words that had the biggest influence on the classification decision for BERT fine-tuned on #StandWithPP tweets, sorted by the median absolute SHAP value. Positive values are colored blue, and negative, red.", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "Words that had the biggest influence on the classification decision for BERT fine-tuned on #DefundPP tweets, sorted by the median absolute SHAP value. Positive values are colored blue, and negative, red.", "num": null }, "TABREF0": { "content": "
2-class | 3-class | # | Example Tweet
PREMISE | SUPPORT WITH REASON | 1 | I #StandWithPP. Routine healthcare shouldn't exist just for the rich.
PREMISE | SUPPORT WITH REASON | 2 | @user helps men too! #StandWithPP
PREMISE | SUPPORT WITH REASON | 3 | #AllLivesMatter even the unborn. #DefundPP #DefundPlannedParenthood
PREMISE | SUPPORT WITH REASON | 4 | God Has A Plan For Every Life #PraytoEndAbortion #DefundPP #DefendLife #innocent
NON-PREMISE | SUPPORT WITHOUT REASON | 5 | #StandWithPP now and forever.
NON-PREMISE | SUPPORT WITHOUT REASON | 6 | I wish everyone shouldn't hate on Planned Parenthood so fucking much #StandWithPP
NON-PREMISE | SUPPORT WITHOUT REASON | 7 | #YouHadMeAt I'll do everything in my power to #DefundPP
NON-PREMISE | SUPPORT WITHOUT REASON | 8 | Tell your Senators to Defund Planned Parenthood SIGN & RT #DefundPP [url]
NON-PREMISE | NO EXPLICIT SUPPORT | 9 | Legal Troubles Continue for Group Attacking Planned Parenthood #StandWithPP [url]
NON-PREMISE | NO EXPLICIT SUPPORT | 10 | #SextaComMFSDV #StandWithPP Citizen Khan Grow your Twitter followers [url]
NON-PREMISE | NO EXPLICIT SUPPORT | 11 | @user Paid staffers? #defundpp
NON-PREMISE | NO EXPLICIT SUPPORT | 12 | Dont listen to the Daily Bugle . Spider-Man is a force for good .#StandWithPP #PeterParker
", "html": null, "type_str": "table", "text": "", "num": null }, "TABREF1": { "content": "
Example Tweets for Each Class (2-class and 3-class setup). The user can show support with reason
(SUPPORT WITH REASON), show support without providing a reason (SUPPORT WITHOUT REASON), or make
their stance unclear through an irrelevant or overall confusing tweet (NO EXPLICIT SUPPORT).
", "html": null, "type_str": "table", "text": "", "num": null }, "TABREF3": { "content": "", "html": null, "type_str": "table", "text": "Distribution of Classes in the Dataset. 30% of the tweets for each hashtag were randomly put in the held-out test set.", "num": null }, "TABREF4": { "content": "
Model | #StandWithPP: Prec / Rec / F1 / Acc | #DefundPP: Prec / Rec / F1 / Acc
Baseline approaches adopted from Schaefer and Stede (2020)
XGBoost with UNIGRAMS | .682 / .676 / .669 / .676 | .665 / .682 / .667 / .682
XGBoost with UNIGRAMS + BIGRAMS | .697 / .686 / .679 / .686 | .671 / .686 / .671 / .686
XGBoost with BERT Word Embedding | .542 / .543 / .528 / .543 | .534 / .549 / .532 / .549
CNN with GloVe Word Embedding (CommonCrawl) | .675 / .661 / .650 / .661 | .607 / .669 / .634 / .669
CNN with GloVe Word Embedding (Twitter).
", "html": null, "type_str": "table", "text": "", "num": null }, "TABREF5": { "content": "
", "html": null, "type_str": "table", "text": "Example tweets and classifications by fine-tuned BERT. Tokens are highlighted in blue if they have positive attribution scores with respect to the predicted class, and red if negative. The darker the color, the higher the absolute value of the score. The class names are abbreviated as follows: SUPPORT WITH REASON (S+R), SUPPORT WITHOUT REASON (S-R), and NO EXPLICIT SUPPORT (NES).", "num": null } } } }