{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:09.057056Z"
},
"title": "Explainable Unsupervised Argument Similarity Rating with Abstract Meaning Representation and Conclusion Generation",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {}
},
"email": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Heinisch",
"suffix": "",
"affiliation": {
"laboratory": "CITEC",
"institution": "Bielefeld University",
"location": {}
},
"email": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Wiesenbach",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {}
},
"email": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": "",
"affiliation": {
"laboratory": "CITEC",
"institution": "Bielefeld University",
"location": {}
},
"email": "cimiano@techfak.uni-bielefeld.de"
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "When assessing the similarity of arguments, researchers typically use approaches that do not provide interpretable evidence or justifications for their ratings. Hence, the features that determine argument similarity remain elusive. We address this issue by introducing novel argument similarity metrics that aim at high performance and explainability. We show that Abstract Meaning Representation (AMR) graphs can be useful for representing arguments, and that novel AMR graph metrics can offer explanations for argument similarity ratings. We start from the hypothesis that similar premises often lead to similar conclusionsand extend an approach for AMR-based argument similarity rating by estimating, in addition, the similarity of conclusions that we automatically infer from the arguments used as premises. We show that AMR similarity metrics make argument similarity judgements more interpretable and may even support argument quality judgements. Our approach provides significant performance improvements over strong baselines in a fully unsupervised setting. Finally, we make first steps to address the problem of reference-less evaluation of argumentative conclusion generations.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "When assessing the similarity of arguments, researchers typically use approaches that do not provide interpretable evidence or justifications for their ratings. Hence, the features that determine argument similarity remain elusive. We address this issue by introducing novel argument similarity metrics that aim at high performance and explainability. We show that Abstract Meaning Representation (AMR) graphs can be useful for representing arguments, and that novel AMR graph metrics can offer explanations for argument similarity ratings. We start from the hypothesis that similar premises often lead to similar conclusionsand extend an approach for AMR-based argument similarity rating by estimating, in addition, the similarity of conclusions that we automatically infer from the arguments used as premises. We show that AMR similarity metrics make argument similarity judgements more interpretable and may even support argument quality judgements. Our approach provides significant performance improvements over strong baselines in a fully unsupervised setting. Finally, we make first steps to address the problem of reference-less evaluation of argumentative conclusion generations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Rating the similarity of arguments (Reimers et al., 2019) is a core task in argument mining and argument search (Maturana, 1988; Wachsmuth et al., 2017; Ajjour et al., 2019) . Argument similarity ratings are also needed for (case-based) argument retrieval (Rissland et al., 1993; Chesnevar and Maguitman, 2004) , data exploration via argument clustering, and even automated debaters (Slonim et al., 2021) : to counter an opponent's argument, one may retrieve an argument similar to theirs, but of opposite stance to the topic (Wachsmuth et al., 2018) .",
"cite_spans": [
{
"start": 35,
"end": 57,
"text": "(Reimers et al., 2019)",
"ref_id": "BIBREF40"
},
{
"start": 112,
"end": 128,
"text": "(Maturana, 1988;",
"ref_id": "BIBREF26"
},
{
"start": 129,
"end": 152,
"text": "Wachsmuth et al., 2017;",
"ref_id": "BIBREF49"
},
{
"start": 153,
"end": 173,
"text": "Ajjour et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 256,
"end": 279,
"text": "(Rissland et al., 1993;",
"ref_id": "BIBREF41"
},
{
"start": 280,
"end": 310,
"text": "Chesnevar and Maguitman, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 383,
"end": 404,
"text": "(Slonim et al., 2021)",
"ref_id": "BIBREF44"
},
{
"start": 526,
"end": 550,
"text": "(Wachsmuth et al., 2018)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Typically, argument similarity ratings are computed over 'bag-of-word' argument representations, or else over argument representations inferred with language models such as BERT (Devlin et al., 2019) or InferSent (Conneau et al., 2017) . Two key advantages of such approaches are due to their unsupervised setup: First, unsupervised methods do not rely on human annotations, which are expensive and can be subject to noise and biases. Second, it has been shown for previous supervised methods that they have learned less about argumentation tasks than had been assumed, by exploiting spurious clues and artifacts from manually created data Niven and Kao, 2019) . This has led to a recent interest in solving argumentation tasks in an unsupervised manner, e.g., by logical reasoning (Jo et al., 2021) .",
"cite_spans": [
{
"start": 178,
"end": 199,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 213,
"end": 235,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 640,
"end": 660,
"text": "Niven and Kao, 2019)",
"ref_id": "BIBREF29"
},
{
"start": 782,
"end": 799,
"text": "(Jo et al., 2021)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we will highlight that previous methods for rating argument similarity suffer from a common flaw: beyond shallow statistics (word matches in bag-of-word models, or word similarities in distributional space), they do not provide any rationale for their predictions, and the prediction process is in general not transparent. Therefore, we know only little about the following question:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Which argument features correlate with human argument similarity decisions?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we undertake a first attempt at answering this question, by testing two hypotheses: i) Representing arguments with Abstract Meaning Representations (AMRs) and using AMR graph metrics improves argument similarity rating and provides explanatory information. ii) Extending arguments with inferred conclusions can improve argument similarity rating.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following \u00a72 we discuss related work. \u00a73 introduces our two key hypotheses, and \u00a74 presents our argument similarity rating model and its implementation. In \u00a75 we compare our model against strong baselines from prior work. In \u00a76 we conduct several analyses to show how our approach can contribute to a better understanding of arguments, their conclusions and argument similarity ratings: we i) assess predictors of human argument similarity ratings to investigate the criteria that correlate with human ratings of argument similarity; ii) discuss potential advantages of using AMR for graph-based argumentation tasks in a concrete example, and iii) investigate how interpretable argument similarity computation can help assess the quality and usefulness of conclusions drawn from arguments in a reference-less conclusion evaluation setup. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Argument similarity and search Assessing argument similarity is a key task in argument mining (Reimers et al., 2019; Lenz et al., 2019) and can enhance argument search (Maturana, 1988; Rissland et al., 1993; Wachsmuth et al., 2017; Ajjour et al., 2019; Chesnevar and Maguitman, 2004 ). Yet, while delivering solid performance on benchmarks, current methods fail to provide any deeper rationale for their predictions. It is thus not clear whether and to what extent spurious clues or other artifacts may influence the similarity decision Niven and Kao, 2019) . In this paper, we aim at alleviating these issues by i) representing arguments with Abstract Meaning Representation (Banarescu et al., 2013) and conducting similarity assessment using well-defined graph metrics that provide explanatory AMR structure alignments; and ii) by investigating to what extent argument similarity can be projected to inferred conclusions.",
"cite_spans": [
{
"start": 94,
"end": 116,
"text": "(Reimers et al., 2019;",
"ref_id": "BIBREF40"
},
{
"start": 117,
"end": 135,
"text": "Lenz et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 168,
"end": 184,
"text": "(Maturana, 1988;",
"ref_id": "BIBREF26"
},
{
"start": 185,
"end": 207,
"text": "Rissland et al., 1993;",
"ref_id": "BIBREF41"
},
{
"start": 208,
"end": 231,
"text": "Wachsmuth et al., 2017;",
"ref_id": "BIBREF49"
},
{
"start": 232,
"end": 252,
"text": "Ajjour et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 253,
"end": 282,
"text": "Chesnevar and Maguitman, 2004",
"ref_id": "BIBREF9"
},
{
"start": 537,
"end": 557,
"text": "Niven and Kao, 2019)",
"ref_id": "BIBREF29"
},
{
"start": 676,
"end": 700,
"text": "(Banarescu et al., 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Explanations in argumentation Until recently, the quest for explanations in argumentation was mainly focused on theory development. The Toulmin model, e.g., offers a theory of what is needed to make an argument complete (Toulmin, 2003) . Argumentation schemes, which develop taxonomies of argument types and argumentation fallacies (Walton, 2005; Walton et al., 2008) can be viewed as mechanisms for explaining functions, strengths and weaknesses of arguments. Other research aims at studying the computational and formal aspects of argumentation, e.g. abstract argumentation (Dung, 1995) and Bayesian argumentation (Zenker, 2013) . Research in empirical argument mining led researchers to investigate practical methods for explanations (Lawrence, 2021; Becker et al., 2021; Gunning et al., 2019; Rago et al., 2021; Vassiliades et al., 2021) . While most approaches focus on the analysis of linguistic aspects (Lauscher et al., 2021) , e.g., by extracting selected features (Aker et al., 2017; Lugini and Litman, 2018) or leveraging discourse knowledge in language models (Opitz, 2019) , others exploit large background knowledge graphs (Kobbe et al., 2019; Paul et al., 2020; Yuan et al., 2021) such as ConceptNet (Liu and Singh, 2004; Speer et al., 2017) or DBpedia (Mendes et al., 2012) . An advantage of our approach is the explicit graph alignment between two arguments' meaning graphs that better marks related structures, and that can help explain argument similarity judgements.",
"cite_spans": [
{
"start": 220,
"end": 235,
"text": "(Toulmin, 2003)",
"ref_id": "BIBREF47"
},
{
"start": 332,
"end": 346,
"text": "(Walton, 2005;",
"ref_id": "BIBREF52"
},
{
"start": 347,
"end": 367,
"text": "Walton et al., 2008)",
"ref_id": "BIBREF53"
},
{
"start": 576,
"end": 588,
"text": "(Dung, 1995)",
"ref_id": "BIBREF13"
},
{
"start": 616,
"end": 630,
"text": "(Zenker, 2013)",
"ref_id": "BIBREF55"
},
{
"start": 737,
"end": 753,
"text": "(Lawrence, 2021;",
"ref_id": "BIBREF21"
},
{
"start": 754,
"end": 774,
"text": "Becker et al., 2021;",
"ref_id": "BIBREF6"
},
{
"start": 775,
"end": 796,
"text": "Gunning et al., 2019;",
"ref_id": null
},
{
"start": 797,
"end": 815,
"text": "Rago et al., 2021;",
"ref_id": "BIBREF39"
},
{
"start": 816,
"end": 841,
"text": "Vassiliades et al., 2021)",
"ref_id": "BIBREF48"
},
{
"start": 910,
"end": 933,
"text": "(Lauscher et al., 2021)",
"ref_id": "BIBREF20"
},
{
"start": 974,
"end": 993,
"text": "(Aker et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 994,
"end": 1018,
"text": "Lugini and Litman, 2018)",
"ref_id": "BIBREF25"
},
{
"start": 1072,
"end": 1085,
"text": "(Opitz, 2019)",
"ref_id": "BIBREF30"
},
{
"start": 1137,
"end": 1157,
"text": "(Kobbe et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 1158,
"end": 1176,
"text": "Paul et al., 2020;",
"ref_id": "BIBREF35"
},
{
"start": 1177,
"end": 1195,
"text": "Yuan et al., 2021)",
"ref_id": "BIBREF54"
},
{
"start": 1215,
"end": 1236,
"text": "(Liu and Singh, 2004;",
"ref_id": "BIBREF24"
},
{
"start": 1237,
"end": 1256,
"text": "Speer et al., 2017)",
"ref_id": "BIBREF45"
},
{
"start": 1268,
"end": 1289,
"text": "(Mendes et al., 2012)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Argument mining with graphs There is growing interest in extracting graph structures from natural language arguments. Lenz et al. (2020) , e.g., propose a pipeline for detecting and linking argumentative discourse units (ADUs). Al-Khatib et al. (2020) detect textual phrases and link them with POS/NEG relations, where POS indicates a positive influence and NEG a negative influence (inhibition), e.g., sports NEG health issues. However, such approaches lack finer semantic assessment: they do not distinguish word senses, and the linked entities (phrases or ADUs) are taken as atoms, which hampers explainability: when linking sports and health issues with a NEG relation, we cannot differentiate sports NEG issues and sports NEG health (only the former is correct). We target a finer analysis of argumentative texts, by representing them with dense AMR graphs. Additionally, by aligning graph representations of several arguments, our work paves the way for improved argument knowledge graph construction, aided by, or based on, AMR.",
"cite_spans": [
{
"start": 118,
"end": 136,
"text": "Lenz et al. (2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The task of conclusion generation has been recently investigated by Alshomary et al. (2020 Alshomary et al. ( , 2021 , and allows us to infer conclusions from given premises. Conclusion generation can be seen as the inverse of argument generation (Sato et al., 2015; Schiller et al., 2020) . In this work, we show that by considering conclusions inferred from pairs of arguments, we can improve our argument similarity ratings.",
"cite_spans": [
{
"start": 68,
"end": 90,
"text": "Alshomary et al. (2020",
"ref_id": "BIBREF4"
},
{
"start": 91,
"end": 116,
"text": "Alshomary et al. ( , 2021",
"ref_id": "BIBREF3"
},
{
"start": 247,
"end": 266,
"text": "(Sato et al., 2015;",
"ref_id": "BIBREF42"
},
{
"start": 267,
"end": 289,
"text": "Schiller et al., 2020)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of argumentative conclusions",
"sec_num": null
},
{
"text": "We base our models for explanatory argument similarity assessment on two hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
{
"text": "Hypothesis I: Abstract Meaning Representation (Banarescu et al., 2013) of arguments supports explainable argument similarity assessment AMRs are directed, rooted and acyclic graphs that aim at capturing a sentence's meaning in a machine-readable format. Edges are labeled with semantic relation types (e.g., negation, cause, etc.) and vertices denote either variables or concepts (variables are instances of concepts and allow us to capture coreferences) Hence, the AMR formalism captures various semantic phenomena that can play a role when assessing argument similarity. E.g., besides the obviously useful aspect of negation, AMR captures semantic roles and predicate senses (Kingsbury and Palmer, 2002) . While it is clear that similar arguments tend to involve similar predicates and predicate senses, semantic structure and role assignment may also play a role. For instance, the claims: consumption of alcohol leads to depression vs. depression leads to consumption of alcohol are clearly distinct, while sharing the same concepts. Other AMR facets may also be useful. E.g., AMR captures coreferences and resolving them in different ways can induce significant meaning differences, Finally, AMR includes key semantic relations (location, cause, possession, etc.) that are often implicit or underspecified in language, hence their explicit representation in AMR provides a rich basis for assessing arguments.",
"cite_spans": [
{
"start": 46,
"end": 70,
"text": "(Banarescu et al., 2013)",
"ref_id": "BIBREF5"
},
{
"start": 677,
"end": 705,
"text": "(Kingsbury and Palmer, 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
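To make the role-structure point concrete, here is a hand-written sketch of the two claims in PENMAN notation, embedded in Python strings. This is not parser output; the frame `lead-03` and the role labels are illustrative assumptions, not verified annotations.

```python
# Hand-written PENMAN sketches (illustrative only; frame and role labels
# are assumptions, not verified PropBank/AMR annotations).
claim_1 = """
# ::snt Consumption of alcohol leads to depression.
(l / lead-03
   :ARG0 (c / consume-01 :ARG1 (a / alcohol))
   :ARG2 (d / depress-01))
"""
claim_2 = """
# ::snt Depression leads to consumption of alcohol.
(l / lead-03
   :ARG0 (d / depress-01)
   :ARG2 (c / consume-01 :ARG1 (a / alcohol)))
"""
# Both graphs share exactly the same concept nodes; only the role structure
# differs, which a purely concept-based (bag-of-words-like) comparison misses.
```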
{
"text": "Arguments represented with AMR can be compared with AMR graph metrics Damonte et al., 2017; that also induce an explicit alignment between two argument graphs.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "Damonte et al., 2017;",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
{
"text": "Hypothesis II: similar arguments lead to similar conclusions We hypothesize that a key feature of similar arguments is that they invite for similar conclusions. Analogously, dissimilar arguments tend to lead to differing conclusions. Consider the following two arguments: i) Cannabis can have negative effects on brain development of teens. ii) Smoking cannabis is harmful for the lungs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
{
"text": "The arguments are dissimilar, even though they share the same (negative) stance and argue from a similar perspective (health). This dissimilarity is also reflected in the conclusions that can be inferred from them: from i) we can infer that, i.a., Cannabis consumption should be strictly controlled for age or Cannabis can have a negative impact on the brain-while from ii) we could infer that Cannabis, if consumed, should not be smoked or Cannabis smokers should get their lungs checked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
{
"text": "As a complementary example, the similarity of two arguments may be reinforced by the similarity of their inferred conclusions, as shown below: i) Fracking can contaminate water and water wells and suck towns dry. ii) As a water-poor state, fracking and its toxic wastewater presents a serious danger to our communities and ecosystems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
{
"text": "Arguments i) and ii) are rated as similar, presumably because they point at detrimental ramifications of fracking related to water issues. This similarity is likely to be reflected in conclusions drawn from them, such as: i) Fracking can lead to water issues or ii) Fracking poses dangers for water-poor states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "3"
},
{
"text": "According Hyp I, we represent arguments with AMR graphs and rate their similarity with AMR metrics. To test Hyp II we infer conclusions from arguments with language models and compute similarity on arguments extended with their conclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Similarity via AMR Metrics",
"sec_num": "4"
},
{
"text": "We propose three model variants that aim at explaining argument similarity. Given two arguments a, a and their extrapolated conclusions c = conclusion(a), c = conclusion(a ), we compute similarity in the space of abstract meaning representation using a similarity function f in three alternative ways: i) f (a, a ), between the two arguments, ii) f (c, c ) between their conclusions, iii) f (a \u2295 c, a \u2295 c ), i.e., between the combinations of argument a and its derived conclusion c, where we use a simple decomposable weighting:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "f (a\u2295c, a \u2295c ) = \u03bbf (a, a )+(1\u2212\u03bb)f (c, c ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "If not specified otherwise, \u03bb is set to 0.95. 2 The AMR metric f will be described in the following.",
"cite_spans": [
{
"start": 46,
"end": 47,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
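As a minimal sketch (the function and variable names are ours, not from the paper's released code), the decomposable weighting of Eq. (1) can be written as follows, assuming an AMR metric f that returns scores in [0, 1]:

```python
def combined_similarity(f, a, a2, c, c2, lam=0.95):
    """Eq. (1): lam * f(a, a') + (1 - lam) * f(c, c').

    f        -- any AMR graph metric in [0, 1], e.g. S2MATCH
    a, a2    -- AMR graphs of the two arguments
    c, c2    -- AMR graphs of their generated conclusions
    lam=0.95 -- the paper's default; lam in {0, 1} yields the extreme
                decompositions f(c, c') and f(a, a'), respectively.
    """
    return lam * f(a, a2) + (1 - lam) * f(c, c2)
```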
{
"text": "AMR parser We parse all arguments from the data with the parser from amrlib 3 , a fine-tuned T5 sequence-to-sequence model that achieves high scores on AMR benchmarks. AMR metric We use S 2 MATCH , which is based on the AMR graph matching metric SMATCH , but admits graded concept similarity by matching concept nodes with GloVe embeddings (Pennington et al., 2014) and cosine similarity 4 . To find an optimal graph mapping, exactly like SMATCH, it leverages a hill-climber to approximate the NPhard problem of aligning AMR graphs. Following the alignment step, the (soft) matching of propositions (triples) are scored with an F1 score. Since, so-far, little is known about the trade-off and interface between concrete and abstract semantics in human mental representations (Mkrtychian et al., 2019) , we introduce two more variants that assess similarity from complementary perspectives: S2MATCH Concept and S2MATCH Struct . The first metric variant focuses on conceptual overlap (Fig. 1, middle) , i.e. the more concrete semantic aspects, by putting a triple weight on concept matches. The second variant focuses on structural matches (Fig. 1, bottom) , i.e., the more abstract semantic aspects, by putting triple weight on relation matches.",
"cite_spans": [
{
"start": 340,
"end": 365,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 775,
"end": 800,
"text": "(Mkrtychian et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 982,
"end": 999,
"text": "(Fig. 1, middle)",
"ref_id": "FIGREF0"
},
{
"start": 1139,
"end": 1156,
"text": "(Fig. 1, bottom)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.2"
},
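The following toy sketch illustrates the triple-weighting idea behind S2MATCH_Concept and S2MATCH_Struct. It assumes the variable alignment has already been found (real S2MATCH searches for it with a hill climber and matches concepts softly via GloVe); treating graphs as sets of hard triples is a simplification for illustration.

```python
def weighted_triple_f1(triples_a, triples_b, concept_w=1.0, relation_w=1.0):
    """Toy weighted triple-overlap F1 between two (pre-aligned) AMR graphs.

    triples_a / triples_b: sets of (source, relation, target) triples;
    ":instance" triples carry the concepts. Our reading of the variants:
    S2MATCH_Concept triples the weight of concept matches (concept_w=3),
    S2MATCH_Struct triples the weight of relation matches (relation_w=3).
    """
    def w(t):
        return concept_w if t[1] == ":instance" else relation_w

    matched = triples_a & triples_b
    m = sum(w(t) for t in matched)
    prec = m / sum(w(t) for t in triples_a)   # matched weight over graph a
    rec = m / sum(w(t) for t in triples_b)    # matched weight over graph b
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```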
{
"text": "We generate conclusions from arguments using the T5 model (Raffel et al., 2020) pre-trained on summarization tasks. To encourage the model to generate informative conclusions (as opposed to summaries), we further finetune it on premise-conclusion samples from Stab and Gurevych (2017) , which contain intelligible 5 Argument Similarity Prediction with AMR Metrics: Experiments",
"cite_spans": [
{
"start": 58,
"end": 79,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 260,
"end": 284,
"text": "Stab and Gurevych (2017)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion generator",
"sec_num": null
},
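A sketch of the generation step with the HuggingFace transformers API. The "summarize:" input encoding and the beam-sampling decoding follow the appendix; the checkpoint path and the length limit are hypothetical.

```python
# Sketch of conclusion generation with a fine-tuned T5 (transformers API).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
# Hypothetical path: in the paper, T5 is first fine-tuned on
# premise-conclusion pairs from Stab and Gurevych (2017).
model = T5ForConditionalGeneration.from_pretrained("path/to/finetuned-t5")

def conclusion(premises: str) -> str:
    # The appendix encodes inputs as "summarize:<premises>".
    inputs = tokenizer("summarize:" + premises, return_tensors="pt",
                       truncation=True)
    # 5-beam search combined with sampling over the 20 most probable tokens.
    out = model.generate(**inputs, num_beams=5, do_sample=True, top_k=20,
                         max_length=64)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```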
{
"text": "Data set and evaluation metric We use the UKP aspect corpus (Reimers et al., 2019) , which contains 3,596 argument pairs on 28 topics that have been assigned a four-way similarity rating: highly similar (HS), somewhat similar (SS), not similar (NS), different topic/'can't decide' (DTORCD). Following Reimers et al. (2019), we frame the task as a binary prediction problem: highly similar (HS, SS) and non-similar (NS, DTORCD), and we conduct evaluation via cross validation with 4 folds. In every iteration, 7 topics serve as testing data, while the other 21 topics serve to tune a decision threshold of the metric score. 6 As in Reimers et al. 2019, we evaluate the F1 score for each of the two labels and the arithmetic F1 mean (macro F1).",
"cite_spans": [
{
"start": 60,
"end": 82,
"text": "(Reimers et al., 2019)",
"ref_id": "BIBREF40"
},
{
"start": 623,
"end": 624,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.1"
},
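A sketch of the per-fold evaluation protocol under these assumptions: metric scores and binary labels are given as arrays, and the dev topics are used to pick the threshold that maximizes macro F1.

```python
import numpy as np
from sklearn.metrics import f1_score

def tune_threshold(dev_scores, dev_labels):
    # choose the cutoff on the 21 dev topics that maximizes macro F1
    return max(np.unique(dev_scores),
               key=lambda t: f1_score(dev_labels, dev_scores >= t,
                                      average="macro"))

def evaluate_fold(dev_scores, dev_labels, test_scores, test_labels):
    t = tune_threshold(np.asarray(dev_scores), np.asarray(dev_labels))
    preds = np.asarray(test_scores) >= t
    # macro F1 = arithmetic mean of the per-label F1 scores
    return f1_score(test_labels, preds, average="macro")
```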
{
"text": "Baselines We compare to previously established unsupervised baselines (Reimers et al., 2019) : i) Tfidf calculates cosine similarity between Tfidf-weighted bag-of-word vectors; i) InferSent-(FastText|Glove) leverages sentence embeddings produced by the InferSent model (Conneau et al., 2017) based on either FastText (Bojanowski et al., 2016) or GloVe (Pennington et al., 2014) vectors, which are compared with cosine similarity; iii) (GloVe|ELMo|BERT) Embedding uses averaged GloVe embeddings or averaged contextualized embeddings from ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) language models.",
"cite_spans": [
{
"start": 70,
"end": 92,
"text": "(Reimers et al., 2019)",
"ref_id": "BIBREF40"
},
{
"start": 269,
"end": 291,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 317,
"end": 342,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 352,
"end": 377,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 542,
"end": 563,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF37"
},
{
"start": 573,
"end": 594,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.1"
},
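For illustration, a minimal version of the Tfidf baseline (the exact preprocessing and vocabulary fitting in Reimers et al. (2019) may differ):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_similarity(arg1, arg2, vectorizer):
    # cosine similarity between Tfidf-weighted bag-of-word vectors
    vecs = vectorizer.transform([arg1, arg2])
    return cosine_similarity(vecs[0], vecs[1])[0, 0]

# vectorizer = TfidfVectorizer().fit(all_argument_texts)  # fit on the corpus
```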
{
"text": "Best system Table 1 shows our main results. The AMR-based approach that is based on conceptfocused S 2 MATCH scores, taking both the argument and its inferred conclusion into account, obtains rank 1 (68.70 macro F1) and outperforms all baselines, including the BERT baseline. The difference is significant with p < 0.005 (Student t-test). This system is closely followed by other AMR-based systems, e.g., using concept-focused S 2 MATCH that sees only the argument (68.17 macro F1), and standard S 2 MATCH taking both argument and conclusion into account (66.21 macro F1).",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Does incorporating conclusions help? Interestingly, assessing only conclusions (rank 13/14/15) outperforms the random baseline (rank 16). The low performance, in general, is expected, since clearly, argument similarity must be primarily determined based on the arguments, and hence, methods that rate the similarity of arguments only via a conclusion proxy have an obvious disadvantage. Hence, the more interesting question is: Do inferred conclusions provide complementary information for the task? Our results show a tendency that this is the case. All AMR-based models that take both conclusion and argument into account (model type f (a \u2295 c, a \u2295 c )) outperform models that only see the arguments (AMR: +0.77; AMR concept-focus: +0.63; AMR struct-focus +0.40). At this point, however, we cannot explain whether this is due to useful reformulations or truly novel content that was generated, or a mix of both. We will investigate this question deeper in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Argument similarity: driven by abstract or concrete semantics? The strong performance of the concept-focused AMR metric shows that a large overlap in concepts tends to correlate with human ratings more than an overlap in abstract semantic structure. The structure-focused AMR methods (last block in Table 1 ), while significantly outperforming the random baseline, lag behind all other baselines. Note, however, that the standard AMRbased model, which weights concept and structure overlap equally, provides strong performance, oc- ",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Pearson's \u03c1 predictor f (a, a) f (c, c) f (ac, a c ) Concepts 0.492 \u2021 0.299 \u2021 0.492 \u2021 Sem. Role Labels (SRL) 0.400 \u2021 0.185 \u2021 0.402 \u2021 Predicate Frames 0.355 \u2021 0.232 \u2021 0.357 \u2021 Reentrancies (Coref.) 0.235 \u2021 0.085 \u2021 0.235 \u2021 Named Entity (NER) 0.076 \u2021 0.052 \u2021 0.077 \u2021 Negations 0.042 \u2020 -0.011 0.042 \u2020",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "The previous experiment suggests that human argument similarity ratings can be modeled through a combination of different meaning facets, with a focus on concepts. We will now investigate how human argument similarity ratings correlate with specific meaning aspects represented in AMR graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine predictors of argument similarity",
"sec_num": "6.1"
},
{
"text": "Setup For this we leverage fine-grained AMR metrics (Damonte et al., 2017) and compute semantic similarity with respect to 6 meaning aspects i) named entities (NER); ii) negation; iii) lexical concepts; iv) predicate frames; v) coreference and vi) semantic roles (SRL). Instead of merging the labels somewhat similar and similar, we keep them distinct and use a three-point Likert scale: 0 means not similar or unrelated, 0.5 means somewhat similar, and 1 means highly similar. To assess the correlation, we use Pearson's correlation coefficient.",
"cite_spans": [
{
"start": 52,
"end": 74,
"text": "(Damonte et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine predictors of argument similarity",
"sec_num": "6.1"
},
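A sketch of the correlation analysis, assuming per-aspect scores have already been computed with the fine-grained AMR metrics:

```python
from scipy.stats import pearsonr

# three-point Likert mapping of the human labels
LIKERT = {"NS": 0.0, "DTORCD": 0.0, "SS": 0.5, "HS": 1.0}

def aspect_correlations(aspect_scores, human_labels):
    """aspect_scores: {aspect name -> list of metric scores per pair};
    human_labels: one label per pair, mapped to the 0/0.5/1 scale."""
    y = [LIKERT[label] for label in human_labels]
    return {name: pearsonr(scores, y)[0]
            for name, scores in aspect_scores.items()}
```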
{
"text": "Results of this univariate feature analysis are displayed in Table 2 . As expected from the earlier experiment, shared concepts are strong predictors for argument similarity (Concepts, \u03c1=0.49). Also more abstract semantic features, such as similar semantic roles, have a solid signalling effect (SRL, \u03c1=0.40). Similarly, coreferences have predictive capacity, though at a lower range (\u03c1=0.23). On the other hand, negation or shared named entities do exhibit only small (yet still significant) predictive capacity (Negation, \u03c1=0.04 and NER, \u03c1=0.08).",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Fine predictors of argument similarity",
"sec_num": "6.1"
},
{
"text": "The low correlation of NE overlap with human similarity ratings can in part be explained by the fact that we do not find many arguments where this could potentially matter (in our data, only 1 to 2 out of 1,000 nodes represent person NEs). However, if humans were to rate argument similarity in a dataset that features many arguments from expert opinion (Godden and Walton, 2006; Wagemans, 2011) , named entity overlap may have a significant predictive capacity. Also negation might be more important than what we see in this analysis, since it can be expressed in alternative ways (e.g., through antonyms) that are not encoded as such in AMR.",
"cite_spans": [
{
"start": 366,
"end": 379,
"text": "Walton, 2006;",
"ref_id": "BIBREF14"
},
{
"start": 380,
"end": 395,
"text": "Wagemans, 2011)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine predictors of argument similarity",
"sec_num": "6.1"
},
{
"text": "To illustrate the potential of using AMR for connecting and assessing arguments, we study an example case in Fig. 2 . It shows the graphs and graph alignments 8 that were found, for the actual arguments and their automatically induced conclusions, for our running example on fracking.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 115,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example case with alignment",
"sec_num": "6.2"
},
{
"text": "Observations about argument alignment The top figure shows the alignment of the two argument graphs, where important substructures have been linked. Contamination of water and water wells is linked to endangering of our communities and ecosystems (orange nodes and alignment). It is also appropriate that towns that are sucked dry is linked to water poor state (blue). This link is very valuable since these statements stand in a semantic EXACERBATE-relation that may be important for the arguments' similarity (the water-poverty of states is exacerbated if towns are sucked dry). Ideally, we would like such alignments to be labeled with a corresponding semantic relation. In future work, we plan to achieve this by leveraging commonsense knowledge graphs like ConceptNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example case with alignment",
"sec_num": "6.2"
},
{
"text": "Observations about conclusion alignment The bottom figure shows the alignment of the automatically deduced conclusions. For the left argument, the conclusion fails to produce an abstraction and more or less repeats the argument. For the argument on the right-hand side, however, the conclusion generator produced a more informative conclusion. From the input argument it concludes that Fracking and its toxic wastewater are a threat to the environmentfocusing on the negative environmental impact of fracking. This triggers a graph alignment which adds valuable new information 8 The alignments were computed with S2M Concept+Concl.",
"cite_spans": [
{
"start": 578,
"end": 579,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example case with alignment",
"sec_num": "6.2"
},
{
"text": "(see clouds with dotted margins). The alignment makes explicit that water wells and toxic wastewater stand in a correspondence in the context of fracking. Specifically, we see how the contamination of wells (left graphs) happens: wells are polluted with toxic wastewater (right graphs). Additionally, the left graph helps explain parts of the meaning of the right graph: Fracking and toxic wastewater are a threat because fracking contaminates water and water wells.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example case with alignment",
"sec_num": "6.2"
},
{
"text": "An inferred conclusion can be more or less abstract or dissimilar from the input argument. This raises the question of the quality of an inferred conclusion. In fact, we can apply our AMR similarity metrics to quantify the similarity of an argument and its inferred conclusion-formally: f (a, c)-which may be indicative of the novelty of a conclusion in relation to its premise. Hence, we investigate how AMR similarity metrics can be used to measure the novelty of a conclusion relative to its premise. Another aspect of conclusion quality is its validity or justification, i.e., to what extent it can be trusted. Clearly, a conclusion that is very similar to the premise has a high chance of being valid (as long as the premise is), whereas this is uncertain for parts of its meaning that do not match the premise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigations of conclusion quality",
"sec_num": "6.3"
},
{
"text": "In current research, not much is known about how to rate the quality of a conclusion drawn from an argument. We explore this question by performing a manual assessment of different quality aspects of conclusions, and investigate to what extent these can be assessed with our AMR similarty metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigations of conclusion quality",
"sec_num": "6.3"
},
{
"text": "We randomly sample 100 argument-conclusion pairs per topic. The pairs are given to two annotators whom we ask to assign binary ratings regarding two questions: i) Is the conclusion justified based on the premise? With this we aim to assess whether the argument legitimizes the conclusion; and ii) Does the conclusion introduce some novelty relative to the argument? This should be denied if, e.g., the conclusion repeats the premise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigations of conclusion quality",
"sec_num": "6.3"
},
{
"text": "As shown in Fig. 3 , we measure moderate IAA, with slightly higher agreement for novelty. The results show that T5 often manages to produce either valid (justification, \u224865-75% of cases) or novel content (novelty, \u2248 50-60%), but struggles to produce conclusions that fulfill both criteria (justification & novetly: \u2248 25-35% of cases). Fracking contaminates water and water wells and sucks towns dry.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 18,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Investigations of conclusion quality",
"sec_num": "6.3"
},
{
"text": "[Figure 2 materials. Left argument: \"Fracking can contaminate water and water wells and suck towns dry.\", with generated conclusion \"Fracking contaminates water and water wells and sucks towns dry.\"; right argument: \"As a water-poor state, fracking and its toxic wastewater presents a serious danger to our communities and ecosystems.\", with generated conclusion \"Fracking and its toxic wastewater are a threat to the environment.\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigations of conclusion quality",
"sec_num": "6.3"
},
{
"text": "We now extend the use of our metrics to assess conclusion quality by computing the similarity of argument and conclusion: f (a, c). 9 We calculate six graph similarity statistics of their AMRs to finally produce an aggregate score assessment: i) |a \u2229 c|/|a| measures the relative amount of premise content that is contained in the conclusion ('precision'); ii) |a \u2229 c|/|c| measures the relative amount of conclusion content contained in the premise ('recall'); iii) the harmonic mean of i) and ii) corre-sponds to main metric f (a, c); and features iv-vi) apply a non-linear function to i)-iii), measuring the proximity to the feature means 10 , which expresses the idea that a conclusion that is both novel and justified may be situated at mean similarity of premise and conclusion, measured by f (a, c).",
"cite_spans": [
{
"start": 132,
"end": 133,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Can we predict conclusion quality?",
"sec_num": "6.4"
},
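A sketch of features i)-vi) under a simplifying assumption: AMR graphs are treated as hard triple sets, so that |a ∩ c| has a direct reading (the paper scores the corresponding precision/recall/F1 softly with S2MATCH).

```python
def quality_features(a_triples, c_triples, means):
    """Features i)-vi) for an (argument, conclusion) pair of triple sets.

    means: dataset means of features i)-iii), needed for iv)-vi)."""
    inter = len(a_triples & c_triples)
    prec = inter / len(a_triples)      # i)  premise content kept in conclusion
    rec = inter / len(c_triples)       # ii) conclusion content found in premise
    f_ac = 2 * prec * rec / (prec + rec) if prec + rec else 0.0  # iii) f(a, c)
    base = [prec, rec, f_ac]
    # iv)-vi): proximity to the feature means, 1 - (mu - x)^2 (footnote 10)
    return base + [1 - (mu - x) ** 2 for mu, x in zip(means, base)]
```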
{
"text": "We use a Linear SVM for predicting, in three binary classification tasks, either justification; novelty or both, using the feature set i)-vi). 11 Results are seen in Table 3 . Despite the small training data, performance is good for predicting justified (max. 68.6 F1) or novel (max. 70.0 F1). But predicting a conclusion to be novel & justified yields substantially lower performance (max. 58.3 F1), while still above baseline. Feature correlations show that novel is negatively (-) associated with f (a, c) (iiii), while justified is positively (+) correlated with f (a, c) (i-iii). We find much weaker correlation for novel&justified, tending to mean similarity (iv-vi). Our analyses support Hyp1 in that AMR metrics are able to rate similarity of arguments, of conclusions and of argument-conclusion pairs, and this also allows us to determine if a conclusion is novel or justified. While many justified conclusions are highly similar to the premise, deciding their justification is difficult if they involve novelty. We argue this is because justification cannot be determined from premises alone, but requires external knowledge. We leave this issue for future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 173,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Can we predict conclusion quality?",
"sec_num": "6.4"
},
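A sketch of the classification setup (hyperparameters and data handling are assumptions; the paper averages over 25 leave-one-out runs):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import f1_score

def loo_f1(X, y):
    """Leave-one-out F1 of a linear SVM over features i)-vi).

    X: feature matrix (one row of quality_features per pair);
    y: binary labels for justified, novel, or novel & justified."""
    preds = cross_val_predict(LinearSVC(), X, y, cv=LeaveOneOut())
    return f1_score(y, preds)
```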
{
"text": "Finally, we revisit our Hyp2, that by extending arguments with inferred conclusions, we can support assessment of argument similarity. This raises the issue of the usefulness of a conclusion, in terms of achieving good performance and interpretability of an argument similarity method. The aspect of the usefulness of a conclusion clearly differs from the question of its quality. For one, it is possible that a good conclusion is not useful for argument similarity rating, simply because the assessment of the paired argument premises already provides a confident and precise similarity judgement. On the other hand, a mediocre conclusion could provide complementary indications that can support the similarity judgement. In this final section we aim to assess factors that can determine this usefulness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion usefulness",
"sec_num": "6.5"
},
{
"text": "Operationalizing conclusion usefulness We define a score U for the usefulness of a conclusion, based on a human rating y, the conclusion similarity f (c, c ) and argument similarity f (a, a ), as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion usefulness",
"sec_num": "6.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "U = 1 1+(y\u2212f (c,c )) 2 + (y \u2212 f (a, a )) 2 ,",
"eq_num": "(2)"
}
],
"section": "Conclusion usefulness",
"sec_num": "6.5"
},
{
"text": "where U is maximized iff the automatic similarity rating of the conclusions does not differ from the human rating, while the automatic similarity a) Because you may save up to eight lives through organ donation and enhance many others through tissue donation. c) organ donation is a great way to save up to eight lives. a') This medical research is important to understanding diseases in humans so that lives may be saved and improved. c') medical research is important to understand diseases in humans rating of the premises differs maximally from the human rating. It is in exactly these situations that a conclusion assessment will prove most useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion usefulness",
"sec_num": "6.5"
},
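Eq. (2) transcribed directly into code; f_cc and f_aa denote the precomputed metric scores f(c, c') and f(a, a'):

```python
def usefulness(y, f_cc, f_aa):
    """Eq. (2): large when f(c, c') tracks the human rating y
    while f(a, a') misses it."""
    return 1.0 / (1.0 + (y - f_cc) ** 2) + (y - f_aa) ** 2
```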
{
"text": "Features for assessing conclusion usefulness U We assume the following features for modeling the usefulness of a conclusion, which we compute with our similarity function f : i) the similarity of the arguments f (a, a ); ii) the similarity of the conclusions f (c, c ); iii) the (signed) difference between the argument and the conclusion similarities f (a, a ) \u2212 f (c, c ); iv) we compute the (signed) difference between the similarity of (a, c) and (a , c ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion usefulness",
"sec_num": "6.5"
},
{
"text": "f (a,c)\u2212f (a ,c ) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion usefulness",
"sec_num": "6.5"
},
{
"text": "; finally, v) y is the human rating.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion usefulness",
"sec_num": "6.5"
},
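A sketch collecting the five predictors; the f_* values are assumed to be precomputed AMR metric scores for the respective pairs:

```python
def usefulness_features(f_aa, f_cc, f_ac, f_a2c2, y):
    return [
        f_aa,                  # i)   argument similarity f(a, a')
        f_cc,                  # ii)  conclusion similarity f(c, c')
        f_aa - f_cc,           # iii) signed argument-conclusion difference
        (f_ac - f_a2c2) / 2,   # iv)  (f(a, c) - f(a', c')) / 2
        y,                     # v)   human similarity rating
    ]
```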
{
"text": "Results Table 4 shows that the highest predictive power for conclusion usefulness is feature iii): the similarity of the two arguments minus the similarity of the two conclusions. It exhibits a highly significant positive correlation with conclusion usefulness, and relates to the following scenario: If two arguments are considered to be similar, but the conclusions as dissimilar, this may signal that the arguments are rated dissimilar by the human, and the high initial rating may be reconsidered. Table 5 shows a data sample where the conclusions help to correct an initial, over-optimistic similarity rating of the premises. The premises are rated dissimilar by the human, but since they contain similar concepts, such as saving lives, the AMR metric assigns a high similarity rating (0.7) to the pair (a, a ). However, the automatically generated conclusions (c, c ) are assigned low(er) similarity (0.2). The low rating can be explained by the fact that the conclusion generator has distilled different conclusions from the premises that reflect the dif-ferent foci of the arguments: the first proposes that organ donations are good for saving lives, while the second argument proposes that generally more medical research should be conducted.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 15,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 502,
"end": 509,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Conclusion usefulness",
"sec_num": "6.5"
},
{
"text": "Explanation dimensions Our argument similarity rating approach may provide explanations in various dimensions. i) First and foremost, the explicit alignment and similarity computation based on AMR and AMR graph metrics, by relating similar concepts between arguments and their conclusions, provides insight into which components of two argument AMRs relate to each other, with individual alignment scores, and to what extent they congtribute to the overall score. Especially in light of recent observations showing supervised models to be prone to superficial cues in data sets Niven and Kao, 2019; Heinzerling, 2020; Jo et al., 2021) , this property is desirable. ii) We apply the fine-grained AMR decomposition of Damonte et al. (2017) in terms of semantic phenomena, such as negation or semantic roles. This can further illuminate in which ways an argument pair is similar/dissimilar. iii) By taking into account the similarity of automatically inferred conclusions, the similarity computed for premises may be re-adjusted in case the similarity of the inferred conclusions strongly differs.",
"cite_spans": [
{
"start": 578,
"end": 598,
"text": "Niven and Kao, 2019;",
"ref_id": "BIBREF29"
},
{
"start": 599,
"end": 617,
"text": "Heinzerling, 2020;",
"ref_id": "BIBREF16"
},
{
"start": 618,
"end": 634,
"text": "Jo et al., 2021)",
"ref_id": "BIBREF17"
},
{
"start": 716,
"end": 737,
"text": "Damonte et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "On a related note, the AMR similarity statistics enabled us to gain some first indications of what could be considered a good conclusion (without even matching against a reference): e.g., our qualitative evaluations indicate that good conclusions tend to be neither very similar, nor very dissimilar to the premise. This seems plausible, since (too) high similarity may indicate a mere summary (reducing novelty), while (too) low similarity may indicate a lack of coherence (reducing validity).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "A key component of our approach that influences all aspects of explainability described above, are the similarity metrics computed over AMRs. While we proposed one variant of S 2 MATCH that focuses specifically on the similarity of concepts, further variations could be explored. We may also consider more recent AMR metrics that measure meaning similarity via graph kernels .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perspectives of improvement and future work",
"sec_num": null
},
{
"text": "Our approach also hinges on the quality of the inferred conclusions. The conclusions we obtained are often either justified or novel, but less often satisfy both conditions. In addition, we find that the degree of novelty is often rather small, perhaps reflecting that the T5 generator was pre-trained on summarization data and hence may tend to produce inferences that are not novel, since novelty is not a common characteristics of a summary. On the positive side, our approach can be fueled by an increasing amount of research on argument conclusion generation (Alshomary et al., 2020 (Alshomary et al., , 2021 . In general, and particularly for our approach, it will be interesting to work with systems that produce not only a single, but multiple valid conclusions. Considering relations across and within two conclusion sets inferred from two premises may provide key information on argument similarity.",
"cite_spans": [
{
"start": 564,
"end": 587,
"text": "(Alshomary et al., 2020",
"ref_id": "BIBREF4"
},
{
"start": 588,
"end": 613,
"text": "(Alshomary et al., , 2021",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Perspectives of improvement and future work",
"sec_num": null
},
{
"text": "Finally, by measuring the similarity of premises and their conclusions, our approach could shed light on another important question: how to assess novelty and justification of a conclusion without a reference? This is an important question for research on argument conclusion generation since it lacks methods that can judge the quality of conclusions in the absence of (costly) references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perspectives of improvement and future work",
"sec_num": null
},
{
"text": "In this paper, we investigated two hypotheses: i) AMR meaning representation and graph metrics help in assessing argument similarity, ii) automatically inferred conclusions can aid or reinforce the similarity assessment of arguments. We find solid evidence for the first hypothesis, especially when slightly adapting AMR metrics to focus more on concrete concepts found in arguments. We find weak evidence that supports the second hypothesis, i.e., metrics improve consistently, but by small margins, when they are allowed to additionally consider the AMRs of automatically inferred conclusions. We believe, however, that more substantial gains may be obtained in future work, by improving conclusion generation models such that they produce content that is both valid and novel. Finally, we have made first steps towards a reference-less metric for assessing novelty and justification of generated conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "https://github.com/Heidelberg-nlp/ amr-argument-sim",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We choose a high value of \u03bb since, clearly, the premises are bound to host the primary evidence for similarity, while a conclusion may serve as auxiliary information. In our experiments, we also consider extreme decompositions (\u03bb \u2208 {0, 1}).3 https://github.com/bjascob/amrlib",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If the cosine similarity exceeds \u03c4 = 0.95. and rational conclusions of high linguistic quality.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For further detail on this fine-tuning step see Appendix.6 Strictly speaking, this is not a fully unsupervised setup, however, we stick to this framing of the task to facilitate comparison to the previous work(Reimers et al., 2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Motivated by this result, we conduct two extreme ablations: concept-only and structure-only metrics. While the structure-only variant shows worse results than AMR S-focus (macro F1 \u2206f (a, a ): -2.7), concept-only variant and conceptfocused are more or less on par (macro F1 \u2206f (a, a ): -0.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "AMR metrics have been previously used in NLG evaluation by.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "I.e., given the mean \u00b5 of a feature x, the new valuex i of datum i is x i = 1 \u2212 (\u00b5 \u2212 xi) 2 .11 We average all results over 25 runs of leave-one-out cross validations. When predicting either justification, or novelty, we average over the two annotators; when predicting justification and novelty, to increase the positive class labels slightly, the gold target are cases where one or two annotators annotated both novel and justified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Whenever we encounter multiple premises or supportive claims of a single claim, we concatenate them in document order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to the anonymous reviewers for their valuable comments. This work has been partially funded by the DFG through the project AC-CEPT as part of the Priority Program \"Robust Argumentation Machines\" (SPP1999).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "To fine-tune the sequence to sequence language model T5 for conclusion generation, we create training data from the the persuasive essays dataset of Stab and Gurevych (2017) as follows: From all premise-conclusion-pairs annotated in this dataset, we retrieved all claims with their annotated premises. In addition, we employ all annotated major claims with their supportive claims as premise-conclusion-pairs. 12 We discarded samples for which we cannot retrieve any premise. Each resulting premises-conclusion-sample has 3.1 premises on average.We split the data into 80% instances for training, and 10% for validation and testing, each. For each sample, we input the concatenated premises by encoding summarize:<premises> and train with the conclusion as a target by applying a crossentropy loss for each token. We guide the training process with an early stopping mechanism to ensure the best accuracy (ignoring padding tokens) on our validation dataset. In inference, we apply a 5-beam-search in combination with sampling over the 20 most probable tokens per inference step.To assess the quality and relatedness of the generated conclusions, we manually compared the predicted conclusions with their premises in our test split. Since we observed promising and appropriate conclusion generations, we were encouraged to utilize the learned capabilities of the fine-tuned language model to generate conclusions for the argumentative sentences in the UKP aspect corpus.",
"cite_spans": [
{
"start": 149,
"end": 173,
"text": "Stab and Gurevych (2017)",
"ref_id": "BIBREF46"
},
{
"start": 410,
"end": 412,
"text": "12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Fine-tuning the conclusion generator",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Data acquisition for argument search: The args. me corpus",
"authors": [
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Kiesel",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Hagen",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2019,
"venue": "Joint German/Austrian Conference on Artificial Intelligence (K\u00fcnstliche Intelligenz)",
"volume": "",
"issue": "",
"pages": "48--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yamen Ajjour, Henning Wachsmuth, Johannes Kiesel, Martin Potthast, Matthias Hagen, and Benno Stein. 2019. Data acquisition for argument search: The args. me corpus. In Joint German/Austrian Con- ference on Artificial Intelligence (K\u00fcnstliche Intelli- genz), pages 48-59. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "What works and what does not: Classifier and feature analysis for argument mining",
"authors": [
{
"first": "Ahmet",
"middle": [],
"last": "Aker",
"suffix": ""
},
{
"first": "Alfred",
"middle": [],
"last": "Sliwa",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ruishen",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Niravkumar",
"middle": [],
"last": "Borad",
"suffix": ""
},
{
"first": "Seyedeh",
"middle": [],
"last": "Ziyaei",
"suffix": ""
},
{
"first": "Mina",
"middle": [],
"last": "Ghobadi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 4th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "91--96",
"other_ids": {
"DOI": [
"10.18653/v1/W17-5112"
]
},
"num": null,
"urls": [],
"raw_text": "Ahmet Aker, Alfred Sliwa, Yuan Ma, Ruishen Lui, Niravkumar Borad, Seyedeh Ziyaei, and Mina Ghobadi. 2017. What works and what does not: Classifier and feature analysis for argument mining. In Proceedings of the 4th Workshop on Argument Mining, pages 91-96, Copenhagen, Denmark. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "End-to-end argumentation knowledge graph construction",
"authors": [
{
"first": "Khalid",
"middle": [],
"last": "Al-Khatib",
"suffix": ""
},
{
"first": "Yufang",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Jochim",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Bonin",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "7367--7374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khalid Al-Khatib, Yufang Hou, Henning Wachsmuth, Charles Jochim, Francesca Bonin, and Benno Stein. 2020. End-to-end argumentation knowledge graph construction. In Proceedings of the AAAI Confer- ence on Artificial Intelligence, volume 34, pages 7367-7374.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Belief-based generation of argumentative claims",
"authors": [
{
"first": "Milad",
"middle": [],
"last": "Alshomary",
"suffix": ""
},
{
"first": "Wei-Fan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Timon",
"middle": [],
"last": "Gurcke",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "224--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milad Alshomary, Wei-Fan Chen, Timon Gurcke, and Henning Wachsmuth. 2021. Belief-based genera- tion of argumentative claims. In Proceedings of the 16th Conference of the European Chapter of the As- sociation for Computational Linguistics: Main Vol- ume, pages 224-233, Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Target inference in argument conclusion generation",
"authors": [
{
"first": "Milad",
"middle": [],
"last": "Alshomary",
"suffix": ""
},
{
"first": "Shahbaz",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4334--4345",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.399"
]
},
"num": null,
"urls": [],
"raw_text": "Milad Alshomary, Shahbaz Syed, Martin Potthast, and Henning Wachsmuth. 2020. Target inference in argument conclusion generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4334-4345, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Abstract Meaning Representation for sembanking",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse",
"volume": "",
"issue": "",
"pages": "178--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Reconstructing implicit knowledge with language models",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "Siting",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures",
"volume": "",
"issue": "",
"pages": "11--24",
"other_ids": {
"DOI": [
"10.18653/v1/2021.deelio-1.2"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Becker, Siting Liang, and Anette Frank. 2021. Reconstructing implicit knowledge with language models. In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Ex- traction and Integration for Deep Learning Architec- tures, pages 11-24, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.04606"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Smatch: an evaluation metric for semantic feature structures",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "748--752",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 748-752, Sofia, Bulgaria. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Arguenet: An argument-based recommender system for solving web search queries",
"authors": [
{
"first": "Carlos",
"middle": [
"Iv\u00e1n"
],
"last": "Chesnevar",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"G"
],
"last": "Maguitman",
"suffix": ""
}
],
"year": 2004,
"venue": "2nd International IEEE Conference on'Intelligent Systems'. Proceedings (IEEE Cat. No. 04EX791)",
"volume": "1",
"issue": "",
"pages": "282--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos Iv\u00e1n Chesnevar and Ana G Maguitman. 2004. Arguenet: An argument-based recommender system for solving web search queries. In 2004 2nd Inter- national IEEE Conference on'Intelligent Systems'. Proceedings (IEEE Cat. No. 04EX791), volume 1, pages 282-287. IEEE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1070"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An incremental parser for Abstract Meaning Representation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Damonte",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "536--546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for Abstract Mean- ing Representation. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Pa- pers, pages 536-546, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games",
"authors": [
{
"first": "Dung",
"middle": [],
"last": "Phan Minh",
"suffix": ""
}
],
"year": 1995,
"venue": "Artificial intelligence",
"volume": "77",
"issue": "2",
"pages": "321--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phan Minh Dung. 1995. On the acceptability of argu- ments and its fundamental role in nonmonotonic rea- soning, logic programming and n-person games. Ar- tificial intelligence, 77(2):321-357.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Argument from expert opinion as legal evidence: Critical questions and admissibility criteria of expert testimony in the american legal system",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Godden",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Walton",
"suffix": ""
}
],
"year": 2006,
"venue": "Ratio Juris",
"volume": "19",
"issue": "3",
"pages": "261--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Godden and Douglas Walton. 2006. Argu- ment from expert opinion as legal evidence: Criti- cal questions and admissibility criteria of expert tes- timony in the american legal system. Ratio Juris, 19(3):261-286.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Nlp's clever hans moment has arrived",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Heinzerling",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Cognitive Science",
"volume": "21",
"issue": "1",
"pages": "159--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Heinzerling. 2020. Nlp's clever hans mo- ment has arrived. Journal of Cognitive Science, 21(1):159-168.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Classifying argumentative relations using logical mechanisms and argumentation schemes",
"authors": [
{
"first": "Yohan",
"middle": [],
"last": "Jo",
"suffix": ""
},
{
"first": "Seojin",
"middle": [],
"last": "Bang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yohan Jo, Seojin Bang, Chris Reed, and Eduard H. Hovy. 2021. Classifying argumentative relations us- ing logical mechanisms and argumentation schemes. CoRR, abs/2105.07571.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "From treebank to propbank",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Third International Conference on Language Resources and Evaluation (LREC'02)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Kingsbury and Martha Palmer. 2002. From tree- bank to propbank. In Proceedings of the Third In- ternational Conference on Language Resources and Evaluation (LREC'02).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exploiting Background Knowledge for Argumentative Relation Classification",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Kobbe",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "Ioana",
"middle": [],
"last": "Hulpus",
"suffix": ""
},
{
"first": "Heiner",
"middle": [],
"last": "Stuckenschmidt",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2019,
"venue": "2nd Conference on Language, Data and Knowledge (LDK 2019",
"volume": "70",
"issue": "",
"pages": "1--8",
"other_ids": {
"DOI": [
"10.4230/OASIcs.LDK.2019.8"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Kobbe, Juri Opitz, Maria Becker, Ioana Hulpus, Heiner Stuckenschmidt, and Anette Frank. 2019. Exploiting Background Knowledge for Ar- gumentative Relation Classification. In 2nd Con- ference on Language, Data and Knowledge (LDK 2019), volume 70 of OpenAccess Series in Informat- ics (OASIcs), pages 8:1-8:14, Dagstuhl, Germany. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scientia potentia est -on the role of knowledge in computational argumentation",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Lauscher, Henning Wachsmuth, Iryna Gurevych, and Goran Glava\u0161. 2021. Scientia potentia est -on the role of knowledge in computational argumenta- tion.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Explainable argument mining",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lawrence",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lawrence. 2021. Explainable argument mining. Ph.D. thesis, University of Dundee.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semantic textual similarity measures for case-based retrieval of argument graphs",
"authors": [
{
"first": "Mirko",
"middle": [],
"last": "Lenz",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ollinger",
"suffix": ""
},
{
"first": "Premtim",
"middle": [],
"last": "Sahitaj",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Bergmann",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Case-Based Reasoning",
"volume": "",
"issue": "",
"pages": "219--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirko Lenz, Stefan Ollinger, Premtim Sahitaj, and Ralph Bergmann. 2019. Semantic textual similar- ity measures for case-based retrieval of argument graphs. In International Conference on Case-Based Reasoning, pages 219-234. Springer.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Towards an argument mining pipeline transforming texts to argument graphs",
"authors": [
{
"first": "Mirko",
"middle": [],
"last": "Lenz",
"suffix": ""
},
{
"first": "Premtim",
"middle": [],
"last": "Sahitaj",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Kallenberg",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Coors",
"suffix": ""
},
{
"first": "Lorik",
"middle": [],
"last": "Dumani",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Schenkel",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Bergmann",
"suffix": ""
}
],
"year": 2020,
"venue": "Computational Models of Argument: Proceedings",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirko Lenz, Premtim Sahitaj, Sean Kallenberg, Christopher Coors, Lorik Dumani, Ralf Schenkel, and Ralph Bergmann. 2020. Towards an argu- ment mining pipeline transforming texts to argument graphs. Computational Models of Argument: Pro- ceedings of COMMA 2020, 326:263.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Conceptnet-a practical commonsense reasoning tool-kit",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Push",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2004,
"venue": "BT technology journal",
"volume": "22",
"issue": "4",
"pages": "211--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Liu and Push Singh. 2004. Conceptnet-a practi- cal commonsense reasoning tool-kit. BT technology journal, 22(4):211-226.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Argument component classification for classroom discussions",
"authors": [
{
"first": "Luca",
"middle": [],
"last": "Lugini",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "57--67",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5208"
]
},
"num": null,
"urls": [],
"raw_text": "Luca Lugini and Diane Litman. 2018. Argument com- ponent classification for classroom discussions. In Proceedings of the 5th Workshop on Argument Min- ing, pages 57-67, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Reality: The search for objectivity or the quest for a compelling argument",
"authors": [
{
"first": "",
"middle": [],
"last": "Humberto R Maturana",
"suffix": ""
}
],
"year": 1988,
"venue": "The Irish journal of psychology",
"volume": "9",
"issue": "1",
"pages": "25--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Humberto R Maturana. 1988. Reality: The search for objectivity or the quest for a compelling argument. The Irish journal of psychology, 9(1):25-82.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "DBpedia: A multilingual cross-domain knowledge base",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Mendes",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Jakob",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "1813--1817",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo Mendes, Max Jakob, and Christian Bizer. 2012. DBpedia: A multilingual cross-domain knowledge base. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 1813-1817, Istanbul, Turkey. Eu- ropean Language Resources Association (ELRA).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Concrete vs. abstract semantics: from mental representations to functional brain mapping",
"authors": [
{
"first": "Nadezhda",
"middle": [],
"last": "Mkrtychian",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Blagovechtchenski",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Kurmakaeva",
"suffix": ""
},
{
"first": "Daria",
"middle": [],
"last": "Gnedykh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kostromina",
"suffix": ""
},
{
"first": "Yury",
"middle": [],
"last": "Shtyrov",
"suffix": ""
}
],
"year": 2019,
"venue": "Frontiers in human neuroscience",
"volume": "13",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nadezhda Mkrtychian, Evgeny Blagovechtchenski, Di- ana Kurmakaeva, Daria Gnedykh, Svetlana Kostro- mina, and Yury Shtyrov. 2019. Concrete vs. ab- stract semantics: from mental representations to functional brain mapping. Frontiers in human neu- roscience, 13:267.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Probing neural network comprehension of natural language arguments",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Niven",
"suffix": ""
},
{
"first": "Hung-Yu",
"middle": [],
"last": "Kao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4658--4664",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1459"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Niven and Hung-Yu Kao. 2019. Probing neu- ral network comprehension of natural language ar- guments. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 4658-4664, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Argumentative relation classification as plausibility ranking",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 15th Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Opitz. 2019. Argumentative relation classification as plausibility ranking. In Proceedings of the 15th Conference on Natural Language Processing (KON- VENS 2019): Long Papers, pages 193-202, Erlan- gen, Germany. German Society for Computational Linguistics & Language Technology.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Weisfeiler-leman in the bamboo: Novel amr graph metrics and a benchmark for amr graph similarity",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Angel",
"middle": [],
"last": "Daza",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2108.11949"
]
},
"num": null,
"urls": [],
"raw_text": "Juri Opitz, Angel Daza, and Anette Frank. 2021. Weisfeiler-leman in the bamboo: Novel amr graph metrics and a benchmark for amr graph similarity. arXiv preprint arXiv:2108.11949.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Dissecting content and context in argumentative relation analysis",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "25--34",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4503"
]
},
"num": null,
"urls": [],
"raw_text": "Juri Opitz and Anette Frank. 2019. Dissecting content and context in argumentative relation analysis. In Proceedings of the 6th Workshop on Argument Min- ing, pages 25-34, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Towards a decomposable metric for explainable evaluation of text generation from AMR",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1504--1518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Opitz and Anette Frank. 2021. Towards a decom- posable metric for explainable evaluation of text gen- eration from AMR. In Proceedings of the 16th Con- ference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1504-1518, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "AMR similarity metrics from principles",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Letitia",
"middle": [],
"last": "Parcalabescu",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "522--538",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00329"
]
},
"num": null,
"urls": [],
"raw_text": "Juri Opitz, Letitia Parcalabescu, and Anette Frank. 2020. AMR similarity metrics from principles. Transactions of the Association for Computational Linguistics, 8:522-538.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Argumentative relation classification with background knowledge",
"authors": [
{
"first": "Debjit",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Kobbe",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2020,
"venue": "Computational Models of Argument",
"volume": "",
"issue": "",
"pages": "319--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debjit Paul, Juri Opitz, Maria Becker, Jonathan Kobbe, Graeme Hirst, and Anette Frank. 2020. Argumen- tative relation classification with background knowl- edge. In Computational Models of Argument, pages 319-330. IOS Press.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Exploring the limits of transfer learning with a unified text-totext transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to- text transformer. Journal of Machine Learning Re- search, 21(140):1-67.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Argumentative explanations for interactive recommendations",
"authors": [
{
"first": "A",
"middle": [],
"last": "Rago",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Cocarascu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bechlivanidis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lagnado",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Toni",
"suffix": ""
}
],
"year": 2021,
"venue": "Artificial Intelligence",
"volume": "296",
"issue": "",
"pages": "1--22",
"other_ids": {
"DOI": [
"10.1016/j.artint.2021.103506"
]
},
"num": null,
"urls": [],
"raw_text": "A Rago, O Cocarascu, C Bechlivanidis, D Lagnado, and F Toni. 2021. Argumentative explanations for interactive recommendations. Artificial Intelligence, 296:1-22.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Classification and clustering of arguments with contextualized word embeddings",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Tilman",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "567--578",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1054"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Jo- hannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of ar- guments with contextualized word embeddings. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 567- 578, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Bankxx: a program to generate argument through case-base search",
"authors": [
{
"first": "Edwina",
"middle": [
"L"
],
"last": "Rissland",
"suffix": ""
},
{
"first": "David",
"middle": [
"B"
],
"last": "Skalak",
"suffix": ""
},
{
"first": "M",
"middle": [
"Timur"
],
"last": "Friedman",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 4th international conference on Artificial intelligence and law",
"volume": "",
"issue": "",
"pages": "117--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edwina L Rissland, David B Skalak, and M Timur Friedman. 1993. Bankxx: a program to generate ar- gument through case-base search. In Proceedings of the 4th international conference on Artificial intelli- gence and law, pages 117-124.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "End-to-end argument generation system in debating",
"authors": [
{
"first": "Misa",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Kohsuke",
"middle": [],
"last": "Yanai",
"suffix": ""
},
{
"first": "Toshinori",
"middle": [],
"last": "Miyoshi",
"suffix": ""
},
{
"first": "Toshihiko",
"middle": [],
"last": "Yanase",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Iwayama",
"suffix": ""
},
{
"first": "Qinghua",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yoshiki",
"middle": [],
"last": "Niwa",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL-IJCNLP 2015 System Demonstrations",
"volume": "",
"issue": "",
"pages": "109--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Misa Sato, Kohsuke Yanai, Toshinori Miyoshi, Toshi- hiko Yanase, Makoto Iwayama, Qinghua Sun, and Yoshiki Niwa. 2015. End-to-end argument gener- ation system in debating. In Proceedings of ACL- IJCNLP 2015 System Demonstrations, pages 109- 114.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Aspect-controlled neural argument generation",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00084"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2020. Aspect-controlled neural argument generation. arXiv preprint arXiv:2005.00084.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "An autonomous debating system",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bilu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Alzate",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Bar-Haim",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Bogin",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Bonin",
"suffix": ""
},
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": ""
},
{
"first": "Edo",
"middle": [],
"last": "Cohen-Karlik",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Dankin",
"suffix": ""
},
{
"first": "Lilach",
"middle": [],
"last": "Edelstein",
"suffix": ""
}
],
"year": 2021,
"venue": "Nature",
"volume": "591",
"issue": "7850",
"pages": "379--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Slonim, Yonatan Bilu, Carlos Alzate, Roy Bar-Haim, Ben Bogin, Francesca Bonin, Leshem Choshen, Edo Cohen-Karlik, Lena Dankin, Lilach Edelstein, et al. 2021. An autonomous debating sys- tem. Nature, 591(7850):379-384.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-first AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. In Thirty-first AAAI conference on artificial intelligence.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Parsing argumentation structures in persuasive essays",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "3",
"pages": "619--659",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00295"
]
},
"num": null,
"urls": [],
"raw_text": "Christian Stab and Iryna Gurevych. 2017. Parsing ar- gumentation structures in persuasive essays. Com- putational Linguistics, 43(3):619-659.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "The uses of argument",
"authors": [
{
"first": "",
"middle": [],
"last": "Stephen E Toulmin",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen E Toulmin. 2003. The uses of argument. Cam- bridge university press.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Argumentation and explainable artificial intelligence: a survey. The Knowledge Engineering Review",
"authors": [
{
"first": "Alexandros",
"middle": [],
"last": "Vassiliades",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Bassiliades",
"suffix": ""
},
{
"first": "Theodore",
"middle": [],
"last": "Patkos",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandros Vassiliades, Nick Bassiliades, and Theodore Patkos. 2021. Argumentation and explain- able artificial intelligence: a survey. The Knowledge Engineering Review, 36.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Building an argument search engine for the web",
"authors": [
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Khalid",
"middle": [
"Al"
],
"last": "Khatib",
"suffix": ""
},
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Puschmann",
"suffix": ""
},
{
"first": "Jiani",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Dorsch",
"suffix": ""
},
{
"first": "Viorel",
"middle": [],
"last": "Morari",
"suffix": ""
},
{
"first": "Janek",
"middle": [],
"last": "Bevendorff",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 4th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "49--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henning Wachsmuth, Martin Potthast, Khalid Al Khatib, Yamen Ajjour, Jana Puschmann, Jiani Qu, Jonas Dorsch, Viorel Morari, Janek Bevendorff, and Benno Stein. 2017. Building an argument search engine for the web. In Proceedings of the 4th Workshop on Argument Mining, pages 49-59.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Retrieval of the best counterargument without prior topic knowledge",
"authors": [
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Shahbaz",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "241--251",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1023"
]
},
"num": null,
"urls": [],
"raw_text": "Henning Wachsmuth, Shahbaz Syed, and Benno Stein. 2018. Retrieval of the best counterargument with- out prior topic knowledge. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 241-251, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "The assessment of argumentation from expert opinion. Argumentation",
"authors": [
{
"first": "Jean",
"middle": [
"H",
"M"
],
"last": "Wagemans",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "25",
"issue": "",
"pages": "329--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean HM Wagemans. 2011. The assessment of ar- gumentation from expert opinion. Argumentation, 25(3):329-339.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Justification of argumentation schemes",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Walton",
"suffix": ""
}
],
"year": 2005,
"venue": "The Australasian Journal of Logic",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Walton. 2005. Justification of argumentation schemes. The Australasian Journal of Logic, 3.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Argumentation schemes",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Walton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Macagno",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Walton, Christopher Reed, and Fabrizio Macagno. 2008. Argumentation schemes. Cam- bridge University Press.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Leveraging argumentation knowledge graph for interactive argument pair identification",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Zhongyu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Donghua",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Changjian",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "2310--2319",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.203"
]
},
"num": null,
"urls": [],
"raw_text": "Jian Yuan, Zhongyu Wei, Donghua Zhao, Qi Zhang, and Changjian Jiang. 2021. Leveraging argumen- tation knowledge graph for interactive argument pair identification. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2310-2319, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Bayesian argumentation: The practical side of probability",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Zenker",
"suffix": ""
}
],
"year": 2013,
"venue": "Bayesian Argumentation",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Zenker. 2013. Bayesian argumentation: The practical side of probability. In Bayesian Argumen- tation, pages 1-11. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Standard, concept-focus and structure focus."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Full example (edge-labels omitted for simplified display) of explicit alignments between argument graphs (top) and automatically induced conclusions (bottom). Here, the conclusions help explaining argument similarity, since the alignment connects fracking in both graphs, as well as water wells and toxic wastewater, showing how contaminating of the wells (left graphs) actually happens: wells are polluted with toxic wastewater (right graphs).not justif. nov.justif. jus. & nov. not nov. Annotation results of two quality aspects with IAA: K=0.49 (justification) and K=0.57 (novelty)."
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td/><td/><td>F1 score</td><td/><td>rank</td></tr><tr><td/><td>metric</td><td>model type</td><td>macro</td><td>sim</td><td>not sim</td></tr><tr><td/><td>human</td><td>?</td><td>78.34</td><td>74.74</td><td>81.94</td><td>0</td></tr><tr><td/><td>random</td><td>-</td><td>48.01</td><td>34.31</td><td>61.71</td><td>16</td></tr><tr><td/><td>Tf-Idf</td><td>f (a, a )</td><td>61.18</td><td>52.30</td><td>70.07</td><td>10</td></tr><tr><td>Baselines</td><td colspan=\"2\">InfSnt-fText InfSnt-GloVe f (a, a ) f (a, a ) GloVe Emb. f (a, a ) ELMo Emb. f (a, a )</td><td>66.21 64.94 64.68 64.47</td><td>58.66 54.72 56.32 53.55</td><td>73.76 75.17 73.04 75.38</td><td>3/4 9 8 7</td></tr><tr><td/><td colspan=\"2\">BERT Embe. f (a, a )</td><td>65.39</td><td>52.32</td><td>78.48</td><td>6</td></tr><tr><td/><td>AMR</td><td>f (a, a )</td><td>65.44</td><td/><td/></tr></table>",
"text": "\u00b10.5 55.23 \u00b10.8 75.66 \u00b10.4 5 AMR f (c, c ) 57.31 \u00b10.6 45.73 \u00b11.2 68.89 \u00b10.4 14 AMR f (a \u2295 c, a \u2295 c ) 66.21 \u00b10.3 56.98 \u00b10.6 75.42 \u00b10.1 3/4 AMR C-focus f (a, a ) 68.17 \u00b10.3 59.2 \u00b10.6 77.14 \u00b10.2 2 \u2666\u2663 AMR C-focus f (c, c ) 60.29 \u00b10.5 49.33 \u00b10.4 71.26 \u00b10.8 13 Ours AMR C-focus f (a \u2295 c, a \u2295 c ) 68.70 \u00b10.5 60.35 \u00b11.0 77.04 \u00b10.1 1 \u2666\u2663 AMR S-focus f (a, a ) 60.74 \u00b10.5 49.94 \u00b10.8 71.55 \u00b10.5 12 AMR S-focus f (c, c ) 56.48 \u00b10.3 44.96 \u00b10.6 67.99 \u00b10.2 15 AMR S-focus f (a \u2295 c, a \u2295 c ) 61.14 \u00b10.3 49.74 \u00b10.5 72.55 \u00b10.5 11"
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Main results."
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>cupying rank 3-5 of all examined methods. 7</td></tr><tr><td>6 Analyses &amp; Explainability</td></tr><tr><td>While these model ablations provide a global view</td></tr><tr><td>of what matters in argument similarity rating, we</td></tr><tr><td>now analyze the impact of finer semantic features.</td></tr></table>",
"text": "Semantic predictors of human argument similarity. \u2020/ \u2021: significant with p<0.05/p<0.005."
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>: Macro F1 scores for predicted conclusion qual-</td></tr><tr><td>ity using AMR-based models f (a, c), assessing various</td></tr><tr><td>aspects. For single features, + show positive correla-</td></tr><tr><td>tion; -negative correlation (levels 0.05, 0.005, 0.0005).</td></tr></table>",
"text": ""
},
"TABREF6": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Predictors of conclusion usefulness."
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "AMR metrics detecting dissimilar arguments."
}
}
}
}