{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:10:01.118266Z" }, "title": "Aspect-Based Argument Mining", "authors": [ { "first": "Dietrich", "middle": [], "last": "Trautmann", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ludwig Maximilian University of Munich", "location": { "country": "Germany" } }, "email": "dietrich@trautmann.me" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Computational Argumentation in general and Argument Mining in particular are important research fields. In previous works, many of the challenges to automatically extract and to some degree reason over natural language arguments were addressed. The tools to extract argument units are increasingly available and further open problems can be addressed. In this work, we are presenting the task of Aspect-Based Argument Mining (ABAM), with the essential subtasks of Aspect Term Extraction (ATE) and Nested Segmentation (NS). At the first instance, we create and release an annotated corpus with aspect information on the token-level. We consider aspects as the main point(s) argument units are addressing. This information is important for further downstream tasks such as argument ranking, argument summarization and generation, as well as the search for counter-arguments on the aspect-level. We present several experiments using stateof-the-art supervised architectures and demonstrate their performance for both of the subtasks.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Computational Argumentation in general and Argument Mining in particular are important research fields. In previous works, many of the challenges to automatically extract and to some degree reason over natural language arguments were addressed. The tools to extract argument units are increasingly available and further open problems can be addressed. 
In this work, we present the task of Aspect-Based Argument Mining (ABAM), with the essential subtasks of Aspect Term Extraction (ATE) and Nested Segmentation (NS). As a first step, we create and release an annotated corpus with aspect information on the token-level. We consider aspects as the main point(s) argument units are addressing. This information is important for further downstream tasks such as argument ranking, argument summarization and generation, as well as the search for counter-arguments on the aspect-level. We present several experiments using state-of-the-art supervised architectures and demonstrate their performance for both of the subtasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The field of computational argumentation (Slonim et al., 2016) gained a lot of interest in the last couple of years. This is noticeable from both the number of the submitted publications related to this field and also from the high volume of emerging datasets (Aharoni et al., 2014; Levy et al., 2017; Habernal et al., 2018; Stab et al., 2018; Trautmann et al., 2020a) , specific task formulations (Wachsmuth et al., 2017; Al-Khatib et al., 2020) and models (Kuribayashi et al., 2019; Chakrabarty et al., 2019) .", "cite_spans": [ { "start": 41, "end": 62, "text": "(Slonim et al., 2016)", "ref_id": "BIBREF25" }, { "start": 260, "end": 282, "text": "(Aharoni et al., 2014;", "ref_id": "BIBREF0" }, { "start": 283, "end": 301, "text": "Levy et al., 2017;", "ref_id": "BIBREF14" }, { "start": 302, "end": 324, "text": "Habernal et al., 2018;", "ref_id": "BIBREF10" }, { "start": 325, "end": 343, "text": "Stab et al., 2018;", "ref_id": "BIBREF26" }, { "start": 344, "end": 368, "text": "Trautmann et al., 2020a)", "ref_id": null }, { "start": 398, "end": 422, "text": "(Wachsmuth et al., 2017;", "ref_id": "BIBREF33" }, { "start": 423, "end": 446, "text": "Al-Khatib et al., 2020)", "ref_id": 
"BIBREF2" }, { "start": 458, "end": 484, "text": "(Kuribayashi et al., 2019;", "ref_id": "BIBREF13" }, { "start": 485, "end": 510, "text": "Chakrabarty et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Similar to aspect-based sentiment analysis (Pontiki et al., 2014) , we also see the possibility of breaking down arguments into smaller attributes or meaningful components in the argument mining domain. We consider these components as aspects of the arguments. Previous works already utilized aspectinformation for several subtasks within the argument mining domain (Fujii and Ishikawa, 2006; Misra et al., 2015; Gemechu and Reed, 2019) . However, these works vary significantly in the definition of aspects and do not focus on the aspect-based argument mining explicitly, e.g., employ aspects as a source of side or additional information.", "cite_spans": [ { "start": 43, "end": 65, "text": "(Pontiki et al., 2014)", "ref_id": "BIBREF19" }, { "start": 366, "end": 392, "text": "(Fujii and Ishikawa, 2006;", "ref_id": "BIBREF8" }, { "start": 393, "end": 412, "text": "Misra et al., 2015;", "ref_id": "BIBREF17" }, { "start": 413, "end": 436, "text": "Gemechu and Reed, 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For instance, Fujii and Ishikawa (2006) are mainly focusing on the summarization of opinions, visualizing pro and contra arguments for a given topic. Thereby, the authors are extracting aspects, calling them points at issue, and ranking the arguments according to them. However, their approach relies on rule-based extraction solely. In Misra et al. (2015) , the authors are proposing summarization methods to recognize specific arguments and counter-arguments in social media texts, to further group them across discussions into facets (i.e., aspects) on which that issue is argued. 
Still, this work is limited to a couple of topics and samples. Finally, Gemechu and Reed (2019) also mention aspects as part of four functional components, where the authors use the labels aspects and concepts interchangeably for specific words. However, to the best of our knowledge, the authors did not publish their labeled data, making a comparative evaluation of aspect extraction methods impossible. We, in contrast, specifically address the aspect term", "cite_spans": [ { "start": 14, "end": 39, "text": "Fujii and Ishikawa (2006)", "ref_id": "BIBREF8" }, { "start": 337, "end": 356, "text": "Misra et al. (2015)", "ref_id": "BIBREF17" }, { "start": 656, "end": 679, "text": "Gemechu and Reed (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Supporters say it is an unnecessary regulation designed to force clinics to shut down, while opponents say the prohibition protects women's health.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic: Abortion", "sec_num": null }, { "text": "Granted, the initial construction costs of a nuclear plant are huge, but the ongoing maintenance and fuel costs have proven to be far lower than that of other energy sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "Figure 1: Example annotation of argumentative spans, the corresponding stances (green: supporting/pro; red: opposing/contra) and the aspects (underlined) for the topics abortion and nuclear energy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "extraction, concentrate on the proper definition of aspects and therefore directly emphasize and present the task of Aspect-Based Argument Mining (ABAM) in this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "One of the potential applications of ABAM is the ability to search for specific subtopics within a larger controversial area. 
For instance, for the topic abortion, one might be particularly interested in regulation- or health-related aspects (first example in Figure 1 ). For the topic of nuclear energy, on the other hand, one might care solely about environmental-, cost- or safety-related aspects (second example in Figure 1 ). By searching or filtering for particular aspects, one can select specific information and, therefore, obtain more fine-grained results. Another benefit is the ability to compare opposing arguments on the aspect-level.", "cite_spans": [], "ref_spans": [ { "start": 262, "end": 270, "text": "Figure 1", "ref_id": null }, { "start": 402, "end": 410, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "In this regard, necessary subtasks within the ABAM include the explicit Aspect Term Extraction (ATE) on token-level and the Nested Segmentation (NS) of argumentative parts along with their aspects within a given sentence. Our work is based on Trautmann et al. (2020a) , where the authors already addressed the task of argument unit segmentation. We extend their benchmark with aspect term extraction on these argument units. The ABAM task can be performed in two ways: either as a two-step pipeline approach with argument unit recognition and classification (AURC) followed by aspect term extraction, or as an end-to-end approach in the form of the nested segmentation task. Since the argument units are already provided by Trautmann et al. (2020a) , we can use them directly for the second step in the pipeline, namely the ATE task. In the end-to-end scenario, in contrast, we address both tasks (i.e., AURC and ATE) simultaneously for argumentative sentences.", "cite_spans": [ { "start": 243, "end": 267, "text": "Trautmann et al. (2020a)", "ref_id": null }, { "start": 724, "end": 748, "text": "Trautmann et al. 
(2020a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "One of the main challenges we faced during this work was the absence of publicly available benchmarks containing the aspect terms. Existing argument mining datasets do not contain the required information and therefore could not be directly applied for Aspect-Based Argument Mining. We address this challenge by extending an existing fine-grained argument corpus (Trautmann et al., 2020a) with crowdsourced token-level aspect information. This is our focused main contribution. While annotating the corpus, we were faced multiple difficulties, including the proper definition of aspects and the creation of rules required for the aspect extraction. It is important to note, that within this work, we refer to aspects as the main point(s) arguments are addressing.", "cite_spans": [ { "start": 363, "end": 388, "text": "(Trautmann et al., 2020a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "Last but not least, since we are extending the existing corpus, we do not explicitly concentrate on the stance definition and its annotation. Furthermore, as stated in Trautmann et al. (2020a) , there are two main argument mining directions: closed domain discourse-level and the argument mining from the information seeking perspective. The authors of the underlying corpora follow the latter and provide the reasons for that in their work. We, therefore, adopt their vision on that point.", "cite_spans": [ { "start": 168, "end": 192, "text": "Trautmann et al. 
(2020a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "Summarizing the abovementioned points, our contribution within this work is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "\u2022 We are emphasizing and presenting the task of Aspect-Based Argument Mining on its own.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "\u2022 We are extending an existing corpus with token-level aspect terms, making a comparative evaluation of ABAM methods possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "\u2022 We are presenting a number of strong baselines with a corresponding error analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic: Nuclear Energy", "sec_num": null }, { "text": "We define the ABAM task as following: Given a list of several topic related texts (documents or paragraphs), we segment the texts into N sentences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "sentence i = [t 1 , t 2 , t 3 , . . . , t n ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "The problem is to select, if available, one (or several) span(s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "span j = [t k , . . . 
, t l ]", "eq_num": "(2)" } ], "section": "Problem Statement", "sec_num": "2" }, { "text": "inside each sentence i , with k >= 1, l <= n, l \u2212 k >= SEG min and l \u2212 k <= SEG max (with SEG min = 3 tokens and SEG max = n tokens in a segment), and a corresponding stance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "stance j \u2208 [P RO, CON ]", "eq_num": "(3)" } ], "section": "Problem Statement", "sec_num": "2" }, { "text": "Tokens outside of argumentative spans are assigned the N ON stance label. Furthermore, regularly there is at least one aspect in every selected span with", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "aspect j = [t p , . . . , t q ]", "eq_num": "(4)" } ], "section": "Problem Statement", "sec_num": "2" }, { "text": "where p >= k, q <= l, q \u2212 p >= ASP min and q \u2212 p <= ASP max (with ASP min = 1 token and ASP max = 5 tokens per aspect).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "Regarding the abovementioned problem definition ( \u00a72), we selected three research areas as thematically closed to our task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "Sentiment Analysis: The SemEval workshop organized the task of aspect-based sentiment analysis (Pontiki et al., 2014; Pontiki et al., 2015; Pontiki et al., 2016) . Its subtasks also involved the aspect term extraction, which mainly inspired our approach and definition of the aspect term. 
Recent works applied adversarial training of pretrained language models (Karimi et al., 2020 ) and a combination of contextualized embeddings and hierarchical attention (Trusca et al., 2020) for new state-of-the-art results on this task.", "cite_spans": [ { "start": 95, "end": 117, "text": "(Pontiki et al., 2014;", "ref_id": "BIBREF19" }, { "start": 118, "end": 139, "text": "Pontiki et al., 2015;", "ref_id": "BIBREF20" }, { "start": 140, "end": 161, "text": "Pontiki et al., 2016)", "ref_id": "BIBREF21" }, { "start": 361, "end": 381, "text": "(Karimi et al., 2020", "ref_id": "BIBREF11" }, { "start": 458, "end": 479, "text": "(Trusca et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "Argument Mining: In our work we adopt the definition of argument facets from the previous work and adjust it for our task. For instance, Misra et al. (2015) used the information on argument facets for the summarization of arguments in social media. Furthermore, the authors used argument facets for the argument similarity task (Misra et al., 2016) . The abovementioned works were among the first approaches in the area of argument facet extraction and were limited to only a few topics and samples. Recent work extended this approach to 28 topics and used the aspect information for the argument similarity task and argument clustering (Reimers et al., 2019) . However, the focus of Reimers et al. (2019) was on the pairwise classification of argumentative sentences and not on the aspect term extraction task itself. Lastly, the work by Bar-Haim et al. (2020) defined argument key-points to create concise summaries from a large set of arguments.", "cite_spans": [ { "start": 137, "end": 156, "text": "Misra et al. 
(2015)", "ref_id": "BIBREF17" }, { "start": 328, "end": 348, "text": "(Misra et al., 2016)", "ref_id": "BIBREF18" }, { "start": 635, "end": 657, "text": "(Reimers et al., 2019)", "ref_id": "BIBREF23" }, { "start": 682, "end": 703, "text": "Reimers et al. (2019)", "ref_id": "BIBREF23" }, { "start": 837, "end": 859, "text": "Bar-Haim et al. (2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "Nested Named Entity Recognition: The task of nested-NER is similar to the nested segmentation task ( \u00a75.1.2) that we propose. Early work (Finkel and Manning, 2009) presented newspaper and biomedical corpora, and modeled the data by manual feature extraction. Recent works proposed recurrent neural networks (Katiyar and Cardie, 2018) and sequence-to-sequence (Strakov\u00e1 et al., 2019) approaches. The latter modeled nested labels as multilabels, a method that we also adopted for our task with overlapping stance and aspect labels. ", "cite_spans": [ { "start": 137, "end": 163, "text": "(Finkel and Manning, 2009)", "ref_id": "BIBREF7" }, { "start": 307, "end": 333, "text": "(Katiyar and Cardie, 2018)", "ref_id": "BIBREF12" }, { "start": 359, "end": 382, "text": "(Strakov\u00e1 et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "The creation of the ABAM benchmark is based on the argument units from the AURC corpus (Trautmann et al., 2020a) and is divided into two main parts. The first part addresses two studies for the annotation task formulation, whereas the second part describes the final corpus creation. 
We outsourced the data annotation to independent (crowd-)annotators and created the gold labels based on their results.", "cite_spans": [ { "start": 87, "end": 112, "text": "(Trautmann et al., 2020a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Creation", "sec_num": "4" }, { "text": "We conducted two expert studies on random samples of ten argument units per stance and topic, selected from the AURC corpus. The resulting sets contained 160 samples for each study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expert Study", "sec_num": "4.1" }, { "text": "The first expert study task was to select explicit aspect terms from a given argument unit on the token-level. Two graduate domain experts performed the annotation. Experts were free to select every input token that fits the following task description: \"The aspects are defined as the most important point(s) the argument unit is addressing\". After the annotation step, the Inter-Annotator Agreement (IAA) for the 160 samples was computed. We chose Cohen's \u03ba (Cohen, 1960) as our agreement measure, which resulted in an initial score of 0.538. According to Viera et al. (2005) , this score is in the moderate agreement range. Furthermore, a preliminary analysis of the selected aspect terms from both annotators yielded a list of especially frequent part-of-speech (PoS) patterns for the selected tokens. To further improve the annotation process, the PoS information was employed in the second expert study.", "cite_spans": [ { "start": 464, "end": 477, "text": "(Cohen, 1960)", "ref_id": "BIBREF5" }, { "start": 562, "end": 581, "text": "Viera et al. (2005)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Token-Level Annotation", "sec_num": "4.1.1" }, { "text": "The aspect candidate selection step is crucial for correct aspect term extraction. To select the aspect candidates for the second study, we rely on the part-of-speech information. 
Specifically, the PoS patterns that occurred more than twice in the previous expert study (i.e., token-level annotation) were picked, and some additional PoS patterns were defined (e.g., the singular and plural form of nouns). The tag set is based on the Part-of-Speech tags used in the Penn Treebank Project 1 and the stanza NLP library 2 . The final PoS pattern list is comprehensive and representative (includes 44 patterns, see Table 1) , and ensures linguistically and grammatically correct candidates, without affecting the actual discourse. These PoS patterns were applied to a different set of 160 random samples to create a list of aspect term candidates for every argument unit. The total count of unique aspects for all topics is 4525, but the sum of all unique aspects per topic is 5485. This is due to some aspects appearing in several topics (cf. Table 3 ).", "cite_spans": [], "ref_spans": [ { "start": 622, "end": 630, "text": "Table 1)", "ref_id": "TABREF1" }, { "start": 1052, "end": 1059, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Candidates Selection", "sec_num": "4.1.2" }, { "text": "The annotators were asked to solve the same task as before, but now by selecting one or several options from the aspect term candidates list. If none of the aspect term candidates were appropriate, the option NONE was selected. This simplification of the task, compared to the first study, led to an increased Cohen's \u03ba of 0.790. This is considered substantial agreement (Viera et al., 2005) and we deem this a viable approach for aspect term extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidates Selection", "sec_num": "4.1.2" }, { "text": "Based on the insights from the first two studies, the annotation guidelines ( \u00a7A) were extended with clearer task formulations and examples. Additionally, the final set of PoS patterns (full list in Table 1 ) was applied to all argument units from the AURC corpus. 
The AURC corpus was slightly preprocessed to account for duplicates on the sentence- and segment-level, as well as for some minor errors in span boundaries.", "cite_spans": [], "ref_spans": [ { "start": 199, "end": 206, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpus Annotation", "sec_num": "4.2" }, { "text": "Two independent (crowd-)annotators with a linguistic background and a minimum professional working proficiency in English were recruited for the aspect term extraction task. The annotation procedure was the same as described in \u00a74.1.2. The inter-annotator agreement score for the two annotators resulted in a Cohen's \u03ba of 0.874 for all eight topics. This is considered almost perfect agreement (Viera et al., 2005) .", "cite_spans": [ { "start": 411, "end": 431, "text": "(Viera et al., 2005)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Annotation", "sec_num": "4.2" }, { "text": "Annotation Merge For the gold standard we selected the annotations where both of the annotators agreed on the token-level. This ensured that we always had a selection of aspects if neither of the annotators selected the NONE option. Additionally, shorter aspect terms are favoured by this annotation merge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Annotation", "sec_num": "4.2" }, { "text": "The final descriptive statistics of the ABAM corpus are depicted in Table 2 . There are 12040 aspects in total and 4525 unique (lemmatized) aspects. The topic with the most segments (T8 in Table 2 ) also yielded the most aspects in total (2019). 
Furthermore, 58.10% of the aspects have only one token, 32.12% have 2 tokens, 7.94% have 3 tokens, 1.73% have 4 tokens and only 0.12% have 5 tokens.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 193, "end": 200, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Gold Standard", "sec_num": null }, { "text": "Common Aspects In a further aspect analysis, we aggregated the most common aspects for the eight topics. The top five aspects and the absolute occurrence counts per topic are shown in Table 3 . Furthermore, three aspects (life, problem, government) appeared in all eight topics and the aspects people, cost, society, risk, law appeared in seven topics.", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 188, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Gold Standard", "sec_num": null }, { "text": "This section presents our experimental setup regarding the two tasks, the employed models and the dataset splits. Table 3 : The top 5 most common aspects per topic and for aspects that appear in several topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "In this work we address the two different, but related, subtasks of ABAM in a sequence labeling formulation, following Akhundov et al. (2018) .", "cite_spans": [ { "start": 121, "end": 143, "text": "Akhundov et al. (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Tasks", "sec_num": "5.1" }, { "text": "In the first task (ATE), we employ only the aspect term information within the segments (argument units). 
This sequence labeling task is a binary classification problem per token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aspect Term Extraction", "sec_num": "5.1.1" }, { "text": "In the second task (NS), we utilize full argumentative sentences (like the examples in Figure 1 ) with the stance (PRO, CON, NON) and aspect (O, ASP) information for every token as our input. We extend the stance labels with the aspect information for a total set of five possible combinations ([NON,O] , [PRO,O] , [PRO,ASP] , [CON,O] , [CON,ASP] ). 3 This is a multiclass sequence labeling problem, which solves both the argument unit segmentation and the aspect term extraction tasks.", "cite_spans": [ { "start": 294, "end": 302, "text": "([NON,O]", "ref_id": null }, { "start": 305, "end": 312, "text": "[PRO,O]", "ref_id": null }, { "start": 315, "end": 324, "text": "[PRO,ASP]", "ref_id": null }, { "start": 327, "end": 334, "text": "[CON,O]", "ref_id": null }, { "start": 337, "end": 346, "text": "[CON,ASP]", "ref_id": null } ], "ref_spans": [ { "start": 87, "end": 95, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Nested Segmentation", "sec_num": "5.1.2" }, { "text": "BERT For the two subtasks, we chose the BERT model (Devlin et al., 2019) as a recent state-of-the-art system on a number of natural language processing tasks. We utilize the base and large versions of BERT, as well as both versions of the models with an additional CRF-Layer (Sutton et al., 2012) as the final classification layer in the architecture. 
Further information about hyperparameter search and computing infrastructure can be found in \u00a76.2, \u00a7B and \u00a7C.", "cite_spans": [ { "start": 57, "end": 78, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 280, "end": 301, "text": "(Sutton et al., 2012)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.2" }, { "text": "PoS Patterns Additionally, we applied the PoS-patterns from the aspect candidate creation step described in \u00a74. For the ATE task we labeled all tokens that match the PoS-patterns and report the results as a lower bound for our approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.2" }, { "text": "As the evaluation metric, we report the macro-F1 scores 4 for both of our tasks. Further information about accuracy, precision and recall can be found in \u00a7D. Table 5 : Sample counts per set and domain for the nested segmentation task.", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "For a better understanding of the model performance, we followed the two different dataset splits (domains) as they were defined for the AURC corpus (Trautmann et al., 2020a) . In the inner-topic split we trained, evaluated and tested our models on the same set of topics (T1-T6, Table 2 ). In the cross-topic split we trained our model on T1-T5, selected the best hyperparameters based on the evaluation on T6 and tested on T7 and T8. 
Detailed sample counts are shown in Table 4 and Table 5 for each task, domain and set.", "cite_spans": [ { "start": 149, "end": 174, "text": "(Trautmann et al., 2020a)", "ref_id": null } ], "ref_spans": [ { "start": 280, "end": 287, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 467, "end": 474, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 479, "end": 486, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Inner-Topic & Cross-Topic", "sec_num": "5.4" }, { "text": "This section presents the results for our tasks as described in \u00a75.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "The best performing options are the BERT LARGE models (Table 6 ). Both of them perform similarly, but the one with the CRF-layer is slightly better on the development set for inner-topic and the test set for the cross-topic. The inner-topic scores are higher compared to the more challenging cross-topic set-up, where we evaluate the models on unseen topics. All the models performed much better than the lower bound from the PoS-Patterns Matches. However, these scores are still below the human performance of 0.895. The human performance on this task is based on the results from the second expert study ( \u00a74.1.2).", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 62, "text": "(Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Aspect Term Extraction", "sec_num": "6.1.1" }, { "text": "The results for NS (Table 7) show that the BERT LARGE model outperforms the other listed approaches, except for the development set in the inner-topic set-up. Furthermore, the cross-topic set-up is also more challenging for this task, compared to the inner-topic setting. 
Table 7 : F1 results on the dev and test sets for the inner-topic (INNER) and cross-topic (CROSS) set-ups for the nested segmentation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nested Segmentation", "sec_num": "6.1.2" }, { "text": "For our experimental setup with BERT, we fine-tuned the whole (standard) base and large models, as well as both models with an additional final CRF-Layer. We selected the hyperparameters on the development sets, in particular the learning rate (range: 0.00001-0.00009 in steps of 0.00001) and the dropout rate (range: 0-0.5 in steps of 0.1). We used grid search to cover all possible combinations. The model parameters were optimized with AdamW (Loshchilov and Hutter, 2018) . The training batch size was 32.", "cite_spans": [ { "start": 445, "end": 474, "text": "(Loshchilov and Hutter, 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Hyperparameters", "sec_num": "6.2" }, { "text": "Our reported results are averages over three runs; one epoch took about 1 minute for the base models and less than 2 minutes for the large models on average. We fine-tuned for 10 epochs in the ATE task and for 20 epochs in the NS task. Detailed numbers of the final hyperparameters for each model and task can be found in the tables in the appendix \u00a7B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hyperparameters", "sec_num": "6.2" }, { "text": "Recall our definition of aspects: they are the main point(s) argument units are addressing. Furthermore, considering our annotation guidelines in \u00a7A, the most important point is usually not equal to the given main topic. 
An overview of the main errors found during the evaluation of the development sets for the best performing models in the inner- and cross-topic set-ups is given below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "Aspect Term Extraction During the evaluation of the ATE results, we observed a number of errors, which we grouped into the following categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "\u2022 The models tend to favour NOUNS in general.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "\u2022 Topic words, such as abortion or marijuana legalization, are often selected as aspects, which conflicts with our guidelines. \u2022 Phrase constructions like threat of ... are often selected as a whole aspect by the models. For the benchmark, we, in contrast, focus on the main representative word of such constructions (e.g., suicide vs. threat of suicide). \u2022 In the case of ADJECTIVE+NOUN, we suggest avoiding general adjectives (e.g. new in new treatments), whereas focused adjectives that are part of the concept should be selected (e.g. recreational in recreational marijuana). We observed that the models in general could not sufficiently differentiate between such adjectives. \u2022 Models lack an understanding of domain-specific phrasemes like in vitro fertilisation or life without parole and tend to select only the nominalized part of them (e.g., fertilisation, parole). Overall, the inner-topic set-up achieved much better performance than the cross-topic set-up, and both models showed significantly better results than the PoS-Patterns Matches baseline. 
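The PoS-Patterns Matches baseline is not spelled out in this section; a plausible minimal reconstruction, assuming it marks NOUN and ADJ+NOUN sequences over PoS-tagged tokens as aspect candidates (the tagged sentence below is hand-written for illustration, not actual Stanza output):

```python
# Hedged reconstruction of a PoS-pattern aspect matcher: emit ADJ* NOUN+
# sequences as candidate aspect spans. Input is (token, PoS) pairs, e.g.
# as produced by a tagger such as Stanza; the sentence is illustrative.

def pos_pattern_aspects(tagged):
    spans, i = [], 0
    while i < len(tagged):
        j = i
        while j < len(tagged) and tagged[j][1] == "ADJ":   # optional ADJ run
            j += 1
        k = j
        while k < len(tagged) and tagged[k][1] == "NOUN":  # NOUN run
            k += 1
        if k > j:              # at least one noun: emit the ADJ* NOUN+ span
            spans.append(" ".join(tok for tok, _ in tagged[i:k]))
            i = k
        else:                  # adjectives without a following noun: skip on
            i = j + 1 if j > i else i + 1
    return spans

sent = [("recreational", "ADJ"), ("marijuana", "NOUN"), ("reduces", "VERB"),
        ("chronic", "ADJ"), ("pain", "NOUN")]
print(pos_pattern_aspects(sent))  # → ['recreational marijuana', 'chronic pain']
```

Such a matcher illustrates why the baseline over-generates: it accepts every noun phrase, including topic words and general adjectives, which the trained models only partially learn to exclude.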
However, in the cross-topic set-up we faced more repeated errors, such as the tendency to select topic words as aspects and an insufficient understanding of domain-specific phrasemes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "Nested Segmentation The typology of the main errors in the NS task is similar to that of the ATE task. Additionally, in the NS task, a number of errors occurred due to the wrong assignment of the stance labels, especially in the cross-topic set-up. These results confirm the insight from Trautmann et al. (2020a) , where most of the errors arose due to wrong stance classification. Apparently, the BERT-based models tend to attach to sentiment words for the stance predictions, although sentiment does not always correlate with stance.", "cite_spans": [ { "start": 278, "end": 302, "text": "Trautmann et al. (2020a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "ABAM is a challenging task that, to the best of our knowledge, was not directly addressed before. We made two important contributions: First, we created and released a publicly available benchmark for Aspect-Based Argument Mining. Second, we showcased several baselines for the two subtasks, namely Aspect Term Extraction and Nested Segmentation, and performed an elaborate error analysis. 
We believe that these findings as well as the benchmark hold high potential for further downstream tasks, such as argument ranking, argument summarization and the search for counter-arguments on the aspect-level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "For future work, we foresee the investigation of unsupervised approaches for the Aspect Term Extraction task, since they have shown promising results in the Aspect-Based Sentiment Analysis domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Furthermore, it would be of high interest to incorporate topic-specific knowledge (e.g., an understanding of phrasemes) into the models to address the discussed error types. In another line of work, one could also explore distant supervision (Rakhmetullina et al., 2018) or domain adaptation methods (M\u00e4rz et al., 2019) , as well as relational approaches (Trautmann et al., 2020b) . Table 9 : Hyperparameters (learning rate) for the NS task.", "cite_spans": [ { "start": 238, "end": 266, "text": "(Rakhmetullina et al., 2018)", "ref_id": "BIBREF22" }, { "start": 296, "end": 315, "text": "(M\u00e4rz et al., 2019)", "ref_id": "BIBREF16" }, { "start": 351, "end": 376, "text": "(Trautmann et al., 2020b)", "ref_id": null } ], "ref_spans": [ { "start": 377, "end": 384, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "We used Kaggle's Kernels 5 for the processing of the data and Google's Colab 6 for the training (fine-tuning) of our models. 
The former service offers a single 12GB NVIDIA Tesla K80 GPU, while the latter offers a single 16GB NVIDIA Tesla P100 GPU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Compute Resources", "sec_num": null }, { "text": "The additionally reported numbers for accuracy, precision and recall can be found in Table 10 for the ATE task and in Table 11 for the NS task. The numbers are averages over three runs. Table 10: Accuracy (acc.), precision (pre.) and recall (rec.) results on the dev and test sets for the inner-topic (INNER) and cross-topic (CROSS) set-ups for the aspect term extraction task. These are the average scores from three runs. Table 11: Accuracy (acc.), precision (pre.) and recall (rec.) results on the dev and test sets for the inner-topic (INNER) and cross-topic (CROSS) set-ups for the nested segmentation task (args). These are the average scores from three runs.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Table 10", "ref_id": "TABREF1" }, { "start": 123, "end": 131, "text": "Table 11", "ref_id": "TABREF1" }, { "start": 194, "end": 202, "text": "Table 10", "ref_id": "TABREF1" }, { "start": 433, "end": 441, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "D Additional Results", "sec_num": null }, { "text": "https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html 2 https://stanfordnlp.github.io/stanza/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Tokens that are not part of argument units (spans) get the stance label NON in this sequence labeling task, and aspects are always within argumentative spans.4 https://github.com/chakki-works/seqeval", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.kaggle.com/kernels 6 https://colab.research.google.com/signup", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ 
{ "text": "Annotation guidelines defined for the Aspect Term Extraction task in Aspect-Based Argument Mining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Annotation Guidelines", "sec_num": null }, { "text": "\u2022 Given a main topic and an argumentative segment (unit), please select one or several options from the aspect candidates list.\u2022 If no aspect candidate could be selected from the list, pick the option None.While selecting the aspects, please consider the following rules:\u2022 An aspect is defined as the most important/relevant point for the argument made.\u2022 The most important point is usually not equal to the given main topic.\u2022 In case of doubt, shorter aspect candidates (generic terms; e.g. \"life span\") are preferred over longer candidates (e.g. \"prolonged life span\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": null }, { "text": "\u2022 The selected aspect(s) should be related to the topic in general.\u2022 The presence of AND/OR (usually) denotes multiple aspects:-If a sentence contains multiple phrases (e.g., \"abortion causes breast cancer AND it kills unborn children.\"); -If there is an enumeration and objects connected by AND/OR (e.g. \"abortion causes breast cancer, infertility and pain.\");\u2022 In the case of ADJECTIVE+NOUN, general adjectives should be avoided (e.g. \"new\" in \"new treatments\"), whereas focused adjectives that are part of the concept should be selected (e.g. \"recreational\" in \"recreational marijuana\").\u2022 Please use these test questions for yourself while annotating:-Do you want this argument to be shown to someone if they select this aspect (or these aspects) of the topic, or are other aspect terms in this argument more relevant for the point made? -Which words make you understand the argument most? -Which words are the most relevant and mainly form the meaning of the argument made? 
-If you had to compress the argument into a few of the most relevant words, which words would they be?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Hints", "sec_num": null }, { "text": "The dropout rate of 0.1 was always the best option. The learning rates for the different models are displayed in Table 8 for the ATE task and in Table 9 for the NS task.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 8", "ref_id": null }, { "start": 145, "end": 152, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "B Hyperparameters", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics", "authors": [ { "first": "Ehud", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "Anatoly", "middle": [], "last": "Polnarov", "suffix": "" }, { "first": "Tamar", "middle": [], "last": "Lavee", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Hershcovich", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Gutfreund", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the First Workshop on Argumentation Mining", "volume": "", "issue": "", "pages": "64--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Aharoni, Anatoly Polnarov, Tamar Lavee, Daniel Hershcovich, Ran Levy, Ruty Rinott, Dan Gutfreund, and Noam Slonim. 2014. A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics. In Proceedings of the First Workshop on Argumentation Mining, pages 64-68, Baltimore, Maryland, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Sequence labeling: A practical approach", "authors": [ { "first": "Adnan", "middle": [], "last": "Akhundov", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Trautmann", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Groh", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.03926" ] }, "num": null, "urls": [], "raw_text": "Adnan Akhundov, Dietrich Trautmann, and Georg Groh. 2018. Sequence labeling: A practical approach. arXiv preprint arXiv:1808.03926.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "End-to-end argumentation knowledge graph construction", "authors": [ { "first": "Khalid", "middle": [], "last": "Al-Khatib", "suffix": "" }, { "first": "Yufang", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Jochim", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Bonin", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "7367--7374", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khalid Al-Khatib, Yufang Hou, Henning Wachsmuth, Charles Jochim, Francesca Bonin, and Benno Stein. 2020. End-to-end argumentation knowledge graph construction. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7367-7374.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "From arguments to key points: Towards automatic argument summarization", "authors": [ { "first": "Roy", "middle": [], "last": "Bar-Haim", "suffix": "" }, { "first": "Lilach", "middle": [], "last": "Eden", "suffix": "" }, { "first": "Roni", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Kantor", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Lahav", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.01619" ] }, "num": null, "urls": [], "raw_text": "Roy Bar-Haim, Lilach Eden, Roni Friedman, Yoav Kantor, Dan Lahav, and Noam Slonim. 2020. From arguments to key points: Towards automatic argument summarization. arXiv preprint arXiv:2005.01619.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "AM-PERSAND: Argument mining for PERSuAsive oNline discussions", "authors": [ { "first": "Tuhin", "middle": [], "last": "Chakrabarty", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Mckeown", "suffix": "" }, { "first": "Alyssa", "middle": [], "last": "Hwang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2933--2943", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy McKeown, and Alyssa Hwang. 2019. AM-PERSAND: Argument mining for PERSuAsive oNline discussions. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2933-2943, Hong Kong, China, November. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A coefficient of agreement for nominal scales", "authors": [ { "first": "Jacob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "Educational and psychological measurement", "volume": "20", "issue": "1", "pages": "37--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37-46.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Nested named entity recognition", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "141--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel and Christopher D. Manning. 2009. Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 141-150, Singapore, August. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A system for summarizing and visualizing arguments in subjective documents: Toward supporting decision making", "authors": [ { "first": "Atsushi", "middle": [], "last": "Fujii", "suffix": "" }, { "first": "Tetsuya", "middle": [], "last": "Ishikawa", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Workshop on Sentiment and Subjectivity in Text", "volume": "", "issue": "", "pages": "15--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atsushi Fujii and Tetsuya Ishikawa. 2006. A system for summarizing and visualizing arguments in subjective documents: Toward supporting decision making. 
In Proceedings of the Workshop on Sentiment and Subjectivity in Text, pages 15-22.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Decompositional argument mining: A general purpose approach for argument graph construction", "authors": [ { "first": "Debela", "middle": [], "last": "Gemechu", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Reed", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "516--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debela Gemechu and Chris Reed. 2019. Decompositional argument mining: A general purpose approach for argument graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 516-526. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "SemEval-2018 task 12: The argument reasoning comprehension task", "authors": [ { "first": "Ivan", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "763--772", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. SemEval-2018 task 12: The argument reasoning comprehension task. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 763-772, New Orleans, Louisiana, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Adversarial training for aspect-based sentiment analysis with BERT", "authors": [ { "first": "Akbar", "middle": [], "last": "Karimi", "suffix": "" }, { "first": "Leonardo", "middle": [], "last": "Rossi", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Prati", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Full", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2001.11316" ] }, "num": null, "urls": [], "raw_text": "Akbar Karimi, Leonardo Rossi, Andrea Prati, and Katharina Full. 2020. Adversarial training for aspect-based sentiment analysis with BERT. arXiv preprint arXiv:2001.11316.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Nested named entity recognition revisited", "authors": [ { "first": "Arzoo", "middle": [], "last": "Katiyar", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "861--871", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861-871, New Orleans, Louisiana, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An empirical study of span representations in argumentation structure parsing", "authors": [ { "first": "Tatsuki", "middle": [], "last": "Kuribayashi", "suffix": "" }, { "first": "Hiroki", "middle": [], "last": "Ouchi", "suffix": "" }, { "first": "Naoya", "middle": [], "last": "Inoue", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Reisert", "suffix": "" }, { "first": "Toshinori", "middle": [], "last": "Miyoshi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4691--4698", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, and Kentaro Inui. 2019. An empirical study of span representations in argumentation structure parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4691-4698, Florence, Italy, July. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised corpus-wide claim detection", "authors": [ { "first": "Ran", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Shai", "middle": [], "last": "Gretz", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Sznajder", "suffix": "" }, { "first": "Shay", "middle": [], "last": "Hummel", "suffix": "" }, { "first": "Ranit", "middle": [], "last": "Aharonov", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 4th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "79--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ran Levy, Shai Gretz, Benjamin Sznajder, Shay Hummel, Ranit Aharonov, and Noam Slonim. 2017. Unsupervised corpus-wide claim detection. 
In Proceedings of the 4th Workshop on Argument Mining, pages 79-84, Copenhagen, Denmark, September. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Decoupled weight decay regularization", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Conference on Learning Representations.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Domain adaptation for part-of-speech tagging of noisy user-generated text", "authors": [ { "first": "Luisa", "middle": [], "last": "M\u00e4rz", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Trautmann", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3415--3420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luisa M\u00e4rz, Dietrich Trautmann, and Benjamin Roth. 2019. Domain adaptation for part-of-speech tagging of noisy user-generated text. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3415-3420.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Using summarization to discover argument facets in online idealogical dialog", "authors": [ { "first": "Amita", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Anand", "suffix": "" }, { "first": "Jean", "middle": [ "E" ], "last": "Fox Tree", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "430--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amita Misra, Pranav Anand, Jean E. Fox Tree, and Marilyn Walker. 2015. Using summarization to discover argument facets in online idealogical dialog. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 430-440, Denver, Colorado, May-June. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Measuring the similarity of sentential arguments in dialogue", "authors": [ { "first": "Amita", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Ecker", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "276--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amita Misra, Brian Ecker, and Marilyn Walker. 2016. Measuring the similarity of sentential arguments in dialogue. 
In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 276-287, Los Angeles, September. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "SemEval-2014 task 4: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "John", "middle": [], "last": "Pavlopoulos", "suffix": "" }, { "first": "Harris", "middle": [], "last": "Papageorgiou", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "27--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35, Dublin, Ireland, August. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Semeval-2015 task 12: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitrios", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "Harris", "middle": [], "last": "Papageorgiou", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th international workshop on semantic evaluation", "volume": "", "issue": "", "pages": "486--495", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitrios Galanis, Harris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. 
In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 486-495.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Semeval-2016 task 5: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitrios", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "Haris", "middle": [], "last": "Papageorgiou", "suffix": "" }, { "first": "Ion", "middle": [], "last": "Androutsopoulos", "suffix": "" }, { "first": "Suresh", "middle": [], "last": "Manandhar", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Al-Smadi", "suffix": "" }, { "first": "Mahmoud", "middle": [], "last": "Al-Ayyoub", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Orph\u00e9e", "middle": [], "last": "De Clercq", "suffix": "" } ], "year": 2016, "venue": "10th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad Al-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph\u00e9e De Clercq, et al. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. 
In 10th International Workshop on Semantic Evaluation (SemEval 2016).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distant supervision for emotion classification task using emoji2emotion", "authors": [ { "first": "Aisulu", "middle": [], "last": "Rakhmetullina", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Trautmann", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Groh", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 1st International Workshop on Emoji Understanding and Applications in Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aisulu Rakhmetullina, Dietrich Trautmann, and Georg Groh. 2018. Distant supervision for emotion classification task using emoji2emotion. In Proceedings of the 1st International Workshop on Emoji Understanding and Applications in Social Media (Emoji2018). Stanford, CA, USA. http://ceur-ws.org, volume 2130.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Classification and clustering of arguments with contextualized word embeddings", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Tilman", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of arguments with contextualized word embeddings. 
In Proceedings of the 57th", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "567--578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 567-578, Florence, Italy, July. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "NLP approaches to computational argumentation", "authors": [ { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Reed", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Slonim, Iryna Gurevych, Chris Reed, and Benno Stein. 2016. NLP approaches to computational argumentation. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Cross-topic argument mining from heterogeneous sources", "authors": [ { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Rai", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3664--3674", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018. Cross-topic argument mining from heterogeneous sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3664-3674. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Neural architectures for nested NER through linearization", "authors": [ { "first": "Jana", "middle": [], "last": "Strakov\u00e1", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5326--5331", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jana Strakov\u00e1, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested NER through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331, Florence, Italy, July. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "An introduction to conditional random fields. 
Foundations and Trends in Machine Learning", "authors": [ { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2012, "venue": "", "volume": "4", "issue": "", "pages": "267--373", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sutton, Andrew McCallum, et al. 2012. An introduction to conditional random fields. Foundations and Trends in Machine Learning, 4(4):267-373.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Hinrich Sch\u00fctze, and Iryna Gurevych. 2020a. Fine-grained argument unit recognition and classification", "authors": [ { "first": "Dietrich", "middle": [], "last": "Trautmann", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" } ], "year": null, "venue": "The Thirty-Fourth AAAI Conf. on Artificial Intelligence", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dietrich Trautmann, Johannes Daxenberger, Christian Stab, Hinrich Sch\u00fctze, and Iryna Gurevych. 2020a. Fine-grained argument unit recognition and classification. In The Thirty-Fourth AAAI Conf. on Artificial Intelligence, New York City, NY, USA, AAAI 2020. AAAI Press, 2.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Thomas Seidl, and Hinrich Sch\u00fctze. 2020b. Relational and fine-grained argument mining", "authors": [ { "first": "Dietrich", "middle": [], "last": "Trautmann", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Fromm", "suffix": "" }, { "first": "Volker", "middle": [], "last": "Tresp", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dietrich Trautmann, Michael Fromm, Volker Tresp, Thomas Seidl, and Hinrich Sch\u00fctze. 2020b. Relational and fine-grained argument mining.
Datenbank-Spektrum.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Flavius Frasincar, and Rommert Dekker. 2020. A hybrid approach for aspect-based sentiment analysis using deep contextual word embeddings and hierarchical attention", "authors": [ { "first": "Maria", "middle": [], "last": "Mihaela Trusca", "suffix": "" }, { "first": "Daan", "middle": [], "last": "Wassenberg", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.08673" ] }, "num": null, "urls": [], "raw_text": "Maria Mihaela Trusca, Daan Wassenberg, Flavius Frasincar, and Rommert Dekker. 2020. A hybrid approach for aspect-based sentiment analysis using deep contextual word embeddings and hierarchical attention. arXiv preprint arXiv:2004.08673.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Understanding interobserver agreement: the kappa statistic", "authors": [ { "first": "J", "middle": [], "last": "Anthony", "suffix": "" }, { "first": "Joanne", "middle": [ "M" ], "last": "Viera", "suffix": "" }, { "first": "", "middle": [], "last": "Garrett", "suffix": "" } ], "year": 2005, "venue": "Fam med", "volume": "37", "issue": "5", "pages": "360--363", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony J Viera, Joanne M Garrett, et al. 2005. Understanding interobserver agreement: the kappa statistic. 
Fam med, 37(5):360-363.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "PageRank\" for argument relevance", "authors": [ { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" }, { "first": "Yamen", "middle": [], "last": "Ajjour", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1117--1127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henning Wachsmuth, Benno Stein, and Yamen Ajjour. 2017. \"PageRank\" for argument relevance. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1117-1127, Valencia, Spain, April. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF1": { "text": "The final set of the 44 Part-of-Speech patterns.", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF3": { "text": "Count of sentences, segments and (total & unique) aspects in the ABAM corpus.", "type_str": "table", "num": null, "html": null, "content": "
" }, "TABREF5": { "text": "", "type_str": "table", "num": null, "html": null, "content": "
" }, "TABREF7": { "text": "Sample counts per set and domain for the aspect term extraction task.", "type_str": "table", "num": null, "html": null, "content": "
set \\ domain | INNER | CROSS
train         |  2268 |  2097
dev           |   307 |   478
test          |   636 |  1185
" }, "TABREF9": { "text": "", "type_str": "table", "num": null, "html": null, "content": "
domain          |    INNER    |    CROSS
model \\ set    | dev  | test | dev  | test
BERT BASE       | .507 | .465 | .278 | .338
BERT BASE +CRF  | .521 | .480 | .270 | .332
BERT LARGE      | .557 | .520 | .315 | .369
BERT LARGE +CRF | .563 | .517 | .293 | .358
F1 results on the dev and test sets for the inner-topic (INNER) and cross-topic (CROSS) set-ups for the aspect term extraction task.
" }, "TABREF10": { "text": "for this task.", "type_str": "table", "num": null, "html": null, "content": "
domain          | INNER | CROSS
BERT BASE       |  6e-5 |  8e-5
BERT BASE +CRF  |  9e-5 |  9e-5
BERT LARGE      |  9e-5 |  9e-5
BERT LARGE +CRF |  9e-5 |  8e-5
" }, "TABREF11": { "text": "Hyperparameters (learning rate) for the ATE task.", "type_str": "table", "num": null, "html": null, "content": "
domain          | INNER | CROSS
BERT BASE       |  7e-5 |  5e-5
BERT BASE +CRF  |  8e-5 |  6e-5
BERT LARGE      |  5e-5 |  7e-5
BERT LARGE +CRF |  7e-5 |  8e-5
" } } } }