{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:13.934870Z"
},
"title": "Is Stance Detection Topic-Independent and Cross-topic Generalizable? -A Reproduction Study",
"authors": [
{
"first": "Myrthe",
"middle": [],
"last": "Reuver",
"suffix": "",
"affiliation": {
"laboratory": "Computation Linguistics and Text Mining Lab",
"institution": "Vrije Universiteit Amsterdam",
"location": {}
},
"email": "myrthe.reuver@vu.nl"
},
{
"first": "Suzan",
"middle": [],
"last": "Verberne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Leiden University",
"location": {}
},
"email": "s.verberne@liacs.leidenuniv.nl"
},
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": "",
"affiliation": {
"laboratory": "Computation Linguistics and Text Mining Lab",
"institution": "Vrije Universiteit Amsterdam",
"location": {}
},
"email": ""
},
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": "",
"affiliation": {
"laboratory": "Computation Linguistics and Text Mining Lab",
"institution": "Vrije Universiteit Amsterdam",
"location": {}
},
"email": "antske.fokkens@vu.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Cross-topic stance detection is the task to automatically detect stances (pro, against, or neutral) on unseen topics. We successfully reproduce state-of-the-art cross-topic stance detection work (Reimers et al., 2019), and systematically analyze its reproducibility. Our attention then turns to the cross-topic aspect of this work, and the specificity of topics in terms of vocabulary and socio-cultural context. We ask: To what extent is stance detection topicindependent and generalizable across topics? We compare the model's performance on various unseen topics, and find topic (e.g. abortion, cloning), class (e.g. pro, con), and their interaction affecting the model's performance. We conclude that investigating performance on different topics, and addressing topic-specific vocabulary and context, is a future avenue for cross-topic stance detection.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Cross-topic stance detection is the task to automatically detect stances (pro, against, or neutral) on unseen topics. We successfully reproduce state-of-the-art cross-topic stance detection work (Reimers et al., 2019), and systematically analyze its reproducibility. Our attention then turns to the cross-topic aspect of this work, and the specificity of topics in terms of vocabulary and socio-cultural context. We ask: To what extent is stance detection topicindependent and generalizable across topics? We compare the model's performance on various unseen topics, and find topic (e.g. abortion, cloning), class (e.g. pro, con), and their interaction affecting the model's performance. We conclude that investigating performance on different topics, and addressing topic-specific vocabulary and context, is a future avenue for cross-topic stance detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "(Online) debate has long been studied and modelled by computational linguistics with argument mining tasks such as stance detection. Stance detection is the task of automatically identifying the stance (agreeing, disagreeing, and/or neutral) of a text towards a debated topic or issue (K\u00fc\u00e7\u00fck and Can, 2020; Schiller et al., 2021) . 1 Its use-cases increasingly relate to online information environments and societal challenges, such as argument search (Stab et al., 2018) , fake news identification (Hanselowski et al., 2018) , or diversifying stances in a news recommender (Reuver et al., 2021) .",
"cite_spans": [
{
"start": 285,
"end": 306,
"text": "(K\u00fc\u00e7\u00fck and Can, 2020;",
"ref_id": "BIBREF16"
},
{
"start": 307,
"end": 329,
"text": "Schiller et al., 2021)",
"ref_id": "BIBREF22"
},
{
"start": 452,
"end": 471,
"text": "(Stab et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 499,
"end": 525,
"text": "(Hanselowski et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 574,
"end": 595,
"text": "(Reuver et al., 2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Cross-topic stance detection models should thus be able to deal with the quickly changing landscape of (online) public debate, where new topics and issues appear all the time. As Schlangen (2021) described in his recent paper on natural language processing (NLP) methodology, generalization is a main goal of computational linguistics. A computational model (e.g. a stance detection model) should learn task capabilities beyond one set of datapoints, in our case: beyond one debate topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Cross-topic stance detection is especially challenging because generalization to a new discussion topic is not trivial. Expressing stances is inherently socio-cultural behavior (Du Bois, 2007) , where social actors place themselves and targets on dimensions in the socio-cultural field. This also comes with very topic-specific word use (Somasundaran and Wiebe, 2009; Wei and Mao, 2019) . For instance, an against abortion argument might be expressed indirectly with a 'pro-life' expression, and someone aware of the socio-cultural context of this debate will be able to recognize this. Knowledge from other debate topics such as gun control may not be useful, since the debate strategies might change per topic. Despite these fundamental challenges, pre-trained Transformer models show promising results on cross-topic argument classification (Reimers et al., 2019; Schiller et al., 2021) .",
"cite_spans": [
{
"start": 181,
"end": 192,
"text": "Bois, 2007)",
"ref_id": "BIBREF9"
},
{
"start": 337,
"end": 367,
"text": "(Somasundaran and Wiebe, 2009;",
"ref_id": "BIBREF25"
},
{
"start": 368,
"end": 386,
"text": "Wei and Mao, 2019)",
"ref_id": "BIBREF30"
},
{
"start": 844,
"end": 866,
"text": "(Reimers et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 867,
"end": 889,
"text": "Schiller et al., 2021)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we investigate the ability of crosstopic stance detection approaches to generalize to different debate topics. Our question is: To what extent is stance detection topic-independent and generalizable across topics?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are threefold. We first complete a reproduction of state-of-the-art cross-topic stance detection work (Reimers et al., 2019) , as reproduction has repeatedly shown to be relevant for NLP (Fokkens et al., 2013; Cohen et al., 2018; Belz et al., 2021) . The reproduction is largely successful: we obtain similar numeric results. Secondly, we investigate the topic-specific performance of this model, and conclude that BERT's performance fluctuates on different topics. Additionally, we find that a bag-of-words-based SVM model can rival its performance for some topics. Thirdly, we relate this to the nature of the stance detection modelling task, which is inherently more connected to sociocultural aspects and topic-specific differences than related tasks such as sentiment analysis.",
"cite_spans": [
{
"start": 120,
"end": 142,
"text": "(Reimers et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 205,
"end": 227,
"text": "(Fokkens et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 228,
"end": 247,
"text": "Cohen et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 248,
"end": 266,
"text": "Belz et al., 2021)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows. Section 2 discusses earlier work on stance detection, and specifically generalizability across topics. Section 3 presents the reproduction results. Section 4 adds additional, topic-specific analyses of the classification performance and a bag-of-words-based model to find topic-(in)dependent features. This is followed by our conclusions in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Stance detection is a long-established task in computational linguistics. K\u00fc\u00e7\u00fck and Can (2020) identify its most commonly used task definition: \"For an input in the form of a piece of text and a target pair, stance detection is a classification problem where the stance of the author of the text is sought in the form of a category label from this set: Favor, Against, Neither.\" (K\u00fc\u00e7\u00fck and Can, 2020, p. 2) . 2 The number of stance classes can vary from 2 to 4, e.g. by adding 'comment' and 'query' next to 'for' and 'against' (Schiller et al., 2021) . K\u00fc\u00e7\u00fck and Can (2020) emphasize that this computational definition is built upon the linguistic phenomenon of actors communicating their evaluation of targets, by which they place themselves and their targets on \"dimensions in the sociocultural field\" (Du Bois, 2007, p. 163) . Current work focuses mostly on debates deemed controversial in the U.S. sociopolitical domain, such as abortion and gun control.",
"cite_spans": [
{
"start": 74,
"end": 94,
"text": "K\u00fc\u00e7\u00fck and Can (2020)",
"ref_id": "BIBREF16"
},
{
"start": 379,
"end": 406,
"text": "(K\u00fc\u00e7\u00fck and Can, 2020, p. 2)",
"ref_id": null
},
{
"start": 409,
"end": 410,
"text": "2",
"ref_id": null
},
{
"start": 527,
"end": 550,
"text": "(Schiller et al., 2021)",
"ref_id": "BIBREF22"
},
{
"start": 553,
"end": 573,
"text": "K\u00fc\u00e7\u00fck and Can (2020)",
"ref_id": "BIBREF16"
},
{
"start": 808,
"end": 827,
"text": "Bois, 2007, p. 163)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Definition of Stance Detection",
"sec_num": "2"
},
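To make this task definition concrete, here is a minimal sketch of the classification interface it implies; the function, its name, and its placeholder body are our own illustration, not a system from the literature:

```python
from typing import Literal

# The three-class label set from Küçük and Can (2020)'s definition.
Stance = Literal["favor", "against", "neither"]

def detect_stance(text: str, target: str) -> Stance:
    """Classify the stance of `text` towards `target`.

    Placeholder only: a real system encodes the (text, target) pair
    and applies a trained classifier (see Sections 3-4).
    """
    raise NotImplementedError
```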
{
"text": "Early work on stance detection focused on parliamentary debates and longer texts (Thomas et al., 2006) . Since Mohammad et al. (2016) 's stance detection shared task, Twitter has attracted a lot of attention in NLP work on stance detection (Zhu et al., 2019; Darwish et al., 2020; Hossain et al., 2020) . Others addressed stance detection in the news domain, with (fake) news headlines (Ferreira and Vlachos, 2016; Hanselowski et al., 2018) , disinformation (Hardalov et al., 2021) and user comments on news websites (Bo\u0161njak and Karan, 2019) .",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Thomas et al., 2006)",
"ref_id": "BIBREF28"
},
{
"start": 111,
"end": 133,
"text": "Mohammad et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 240,
"end": 258,
"text": "(Zhu et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 259,
"end": 280,
"text": "Darwish et al., 2020;",
"ref_id": "BIBREF6"
},
{
"start": 281,
"end": 302,
"text": "Hossain et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 386,
"end": 414,
"text": "(Ferreira and Vlachos, 2016;",
"ref_id": "BIBREF10"
},
{
"start": 415,
"end": 440,
"text": "Hanselowski et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 458,
"end": 481,
"text": "(Hardalov et al., 2021)",
"ref_id": null
},
{
"start": 517,
"end": 542,
"text": "(Bo\u0161njak and Karan, 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior work",
"sec_num": "2.2"
},
{
"text": "Feature-based approaches have largely been replaced by end-to-end neural models. Stance detec-tion has seen a performance increase due to pretrained Transformer models such as BERT (Devlin et al., 2019) . Reimers et al. (2019) reported .20 point F1 improvement over an LSTM baseline with a pre-trained BERT model. Combining multiple stance detection datasets in fine-tuning such a pretrained Transformer again led to a performance increase, though this model lacks robustness against slight test set manipulations (Schiller et al., 2021) .",
"cite_spans": [
{
"start": 181,
"end": 202,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 205,
"end": 226,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 514,
"end": 537,
"text": "(Schiller et al., 2021)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior work",
"sec_num": "2.2"
},
{
"text": "Recent work has specifically worked on identifying stances on topics not seen in training. Reimers et al. (2019) train their model on detecting stances and arguments for unseen topics. In their approach however, they treat all topics and stances on these topics as similar and comparable, and report one averaged evaluation metric over topics.",
"cite_spans": [
{
"start": 91,
"end": 112,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization to new topics",
"sec_num": "2.3"
},
{
"text": "Earlier work (Somasundaran and Wiebe, 2009) already established that ideological stances on topics deemed controversial, such as gay rights, are expressed in a topic-specific manner. Topic-specific features were more informative for SVM models than more topic-independent features.",
"cite_spans": [
{
"start": 13,
"end": 43,
"text": "(Somasundaran and Wiebe, 2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization to new topics",
"sec_num": "2.3"
},
{
"text": "In more recent work, Wei and Mao (2019) instead specifically focus on how generalizable certain topics are for transferring knowledge to new topics on stance detection. Some Twitter discussion topics seem to share a latent, underlying topic (e.g. both feminism and abortion have the latent topic of equality). In a (latent) topic-enhanced multi-layer perceptron (MLP) model with RNN representation of the tweet, the model indeed uses shared vocabulary between the related topics. Allaway et al. (2021) notice that earlier work, when considering training on some topics and testing on others, incorporates topic-relatedness. Unlike these other studies however, Allaway et al. (2021, p. 4756 ) \"do not assume a relationship between training and test topics\" as a fairer test of robustness. Results they present do show that stance detection is related to topic, but their efforts go to finding topic-invariant stance representations, which improves the generalizability of their model. Their consideration of topic similarity shows that topic difference is very relevant to stance detection.",
"cite_spans": [
{
"start": 21,
"end": 39,
"text": "Wei and Mao (2019)",
"ref_id": "BIBREF30"
},
{
"start": 480,
"end": 501,
"text": "Allaway et al. (2021)",
"ref_id": "BIBREF1"
},
{
"start": 660,
"end": 689,
"text": "Allaway et al. (2021, p. 4756",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization to new topics",
"sec_num": "2.3"
},
{
"text": "ALDayel and Magdy (2021) describe in their survey how several studies (Klebanov et al., 2010; Zhu et al., 2019; Darwish et al., 2020) show that texts pro or against an issue use different vocabularies (e.g. using 'pro-life' when expressing a stance against abortion). Some of these studies attempt to leverage these vocabularies to generalize across similar topics. Recent work has looked into generalizing stance detection across datasets, task definitions, and domains (Schiller et al., 2021) , in which topic-specific performance is not mentioned.",
"cite_spans": [
{
"start": 70,
"end": 93,
"text": "(Klebanov et al., 2010;",
"ref_id": "BIBREF15"
},
{
"start": 94,
"end": 111,
"text": "Zhu et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 112,
"end": 133,
"text": "Darwish et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 471,
"end": 494,
"text": "(Schiller et al., 2021)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization to new topics",
"sec_num": "2.3"
},
{
"text": "A recent approach to topic-specificity in stance detection is task adaptation. Stein et al. (2021) acknowledge that stance detection usually requires knowledge about the topic of discussion, which is not available for unseen topics. They approach this problem by changing the task to \"same-side stance classification\", in which a model is trained to classify whether two arguments either have the same or a different stance. This reduces the model's leaning on topic-specific pro-and con-vocabulary, while still being able to separate different stances on the same topic. The best approach to this adapted task on a dedicated leaderboard 3 receives an F1 of .72 in the cross-topic setting with a fine-tuned BERT model (Ollinger et al., 2020) .",
"cite_spans": [
{
"start": 79,
"end": 98,
"text": "Stein et al. (2021)",
"ref_id": null
},
{
"start": 718,
"end": 741,
"text": "(Ollinger et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization to new topics",
"sec_num": "2.3"
},
{
"text": "Our current work adds the discussion of topic difference and topic specificity to state-of-the-art stance detection results. That is, earlier bag-ofwords-based work considered lexical specificity of different topics for stance detection, and we add that into the discussion for the current state of the art: pre-trained, end-to-end neural models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization to new topics",
"sec_num": "2.3"
},
{
"text": "Reimers et al. (2019) apply their approach of crosstopic claim classification to two datasets: the UKP Sentential Argument Mining Corpus (Stab et al., 2018 ) ('the UKP dataset') and the IBM Debater: Evidence Sentences dataset (Shnarch et al., 2018) ('the IBM dataset'). We focus on the UKP Dataset, since the IBM Debater dataset has no 'pro' and 'con' class, but rather 'evidence' and 'no evidence' (and our focus is on stance detection). As a second step after stance classification, the authors also attempt to cluster similar arguments within the same topic in a cross-topic training setting. We do not replicate this component, but instead dive deeper into the classification results.",
"cite_spans": [
{
"start": 137,
"end": 155,
"text": "(Stab et al., 2018",
"ref_id": "BIBREF26"
},
{
"start": 226,
"end": 248,
"text": "(Shnarch et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reproduction Experiments",
"sec_num": "3"
},
{
"text": "We adopt the definition of reproduction by Belz et al. (2021) : repeating the experiments as described in the earlier study, with the exact same data and software. We analyze our reproduced results according to the three dimensions of repro-duction proposed by Cohen et al. 2018: whether we find either the same or different (1) (numeric) values, (2) findings, and (3) conclusions as the earlier study. 4 Reproducing the same values means obtaining the same numeric results from a specific experiment. Experiments involving fine-tuning on BERT are non-deterministic. We therefore consider the metric fully reproduced if the original result lies within two standard deviations (stdevs) from our result, obtained from 10 random seeds. 5 The same finding means that the relation between the values associated with two or more dependent variables is the same, i.e. a system that outperformed another in the original study also does this in the reproduced study. The conclusion is the same when the broader implication of findings and values is the same. Conclusions are thus a matter of interpretation. As such, the same findings can lead to different conclusions and conclusions are, contrary to findings, not repeatable (Cohen et al., 2018) . This section focuses on the repeatable components of reproducing a study: the values and the findings. We address the conclusions using our more detailed analyses in Section 4.",
"cite_spans": [
{
"start": 43,
"end": 61,
"text": "Belz et al. (2021)",
"ref_id": "BIBREF2"
},
{
"start": 403,
"end": 404,
"text": "4",
"ref_id": null
},
{
"start": 733,
"end": 734,
"text": "5",
"ref_id": null
},
{
"start": 1218,
"end": 1238,
"text": "(Cohen et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reproduction Experiments",
"sec_num": "3"
},
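As a minimal sketch of this reproduction criterion (the function and variable names are ours; `statistics` is the Python standard library module):

```python
import statistics

def metric_reproduced(original: float, seed_scores: list[float], k: float = 2.0) -> bool:
    """The criterion described above: the originally reported metric counts
    as fully reproduced if it lies within k standard deviations of the mean
    over our runs with different random seeds."""
    mean = statistics.mean(seed_scores)
    stdev = statistics.stdev(seed_scores)  # sample stdev over the 10 seeds
    return abs(original - mean) <= k * stdev
```

For example, with the BERT-base scores reported in Section 3.4 (mean F1 = .617, stdev = .006 over 10 seeds), the originally reported F1 of .613 satisfies this criterion.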
{
"text": "The UKP dataset (Stab et al., 2018) consists of 25,492 argument sentences from 400 Internet texts (from essays to news texts) on 8 topics. The dataset designer's definition of claim is \"a span of text expressing evidence or reasoning that can be used to either support or oppose a given topic\" (Stab et al., 2018, p. 3665) . They define topic as \"some matter of controversy for which there is an obvious polarity for possible outcomes\" (Stab et al., 2018, p. 3665) , and map this polarity to a text expressing one of two classes: for or against the use, adoption, or idea of the topic under discussion. A third class is 'no argument' to the topic under discussion, i.e. the text span falls outside of this polarity.",
"cite_spans": [
{
"start": 16,
"end": 35,
"text": "(Stab et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 294,
"end": 322,
"text": "(Stab et al., 2018, p. 3665)",
"ref_id": null
},
{
"start": 436,
"end": 464,
"text": "(Stab et al., 2018, p. 3665)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3.1"
},
{
"text": "The 8 topics in the dataset were randomly chosen from online lists of controversial topics on discussion websites (Stab et al., 2018, p. 3666) . Specifically, these topics are abortion, cloning, death penalty, gun control, marijuana legalization, minimum wage, nuclear energy and school uniforms. The stance classes (pro, con, and no argument) were annotated by two argument mining experts and seven U.S. crowdworkers. The distribution of the dataset for different topics is shown in Table 1 .",
"cite_spans": [
{
"start": 114,
"end": 142,
"text": "(Stab et al., 2018, p. 3666)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 484,
"end": 491,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3.1"
},
{
"text": "In Stab et al. (2018) we see a difference in agreement on stance classes in different topics, especially between expert and crowd. The topic achieving the highest agreement between crowd worker and expert is school uniforms (\u03ba = .889), and the lowest is death penalty (\u03ba = .576). The standard deviation over topics is .08 for expert-expert coded data and .16 for expert-crowd coded, both with a mean of \u03ba = .72.",
"cite_spans": [
{
"start": 3,
"end": 21,
"text": "Stab et al. (2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3.1"
},
{
"text": "The UKP Dataset is not available online due to copyright concerns, but there is a scraping script with archived hyperlinks available on Reimers et al. (2019)'s GitHub page. We ran this script with all specifications given. The scraping script was able to return all claims on 6 of the 8 topics. The topics for which not all claims were detected were nuclear energy and minimum wage. We then instead obtained the complete datafiles from the authors. 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining the Data",
"sec_num": "3.2"
},
{
"text": "Reimers et al. 2019use the training method described in Stab et al. (2018) . Each topic is split into a training (70%), development (10%), and test split (20%). Training is done on the training splits of 7 topics, tuned on the development split (10%) of these 7 topics, and finally evaluated on the test split (20%) of the held-out 8th topic. They do this for each of the 8 topics (holding out a different topic each time), then apply this procedure for 10 different random seeds on a GPU. Evaluation is assessed with macro F1, averaged over all topics and all random seeds. Their best performing model is a fine-tuned BERT-large model (Devlin et al., 2019) , but with only minor improvement over BERT-base.",
"cite_spans": [
{
"start": 56,
"end": 74,
"text": "Stab et al. (2018)",
"ref_id": "BIBREF26"
},
{
"start": 636,
"end": 657,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Evaluation Method",
"sec_num": "3.3"
},
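A sketch of this hold-one-topic-out protocol, assuming hypothetical `train_fn` and `eval_fn` callables for fine-tuning and macro-F1 evaluation, and a `splits` mapping from topic to its (train, dev, test) splits:

```python
import statistics

TOPICS = ["abortion", "cloning", "death penalty", "gun control",
          "marijuana legalization", "minimum wage", "nuclear energy",
          "school uniforms"]

def cross_topic_macro_f1(splits, train_fn, eval_fn, seeds=range(10)):
    """Train on the train/dev splits of 7 topics, evaluate macro F1 on the
    test split of the held-out 8th topic; repeat per topic and seed."""
    scores = []
    for seed in seeds:
        for held_out in TOPICS:
            train = [ex for t in TOPICS if t != held_out for ex in splits[t][0]]
            dev = [ex for t in TOPICS if t != held_out for ex in splits[t][1]]
            model = train_fn(train, dev, seed=seed)
            scores.append(eval_fn(model, splits[held_out][2]))
    return statistics.mean(scores)  # averaged over all topics and seeds
```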
{
"text": "We use the same training set-up and BERT models for our reproduction. For training, we use the author's code with Python3.8 on a single NVIDIA GeForce RTX 2080 Ti GPU. Our learning rate is 2e-5 for both models, as in Reimers et al. (2019) . 7 We additionally train a non-BERT model (a Support Vector Machine (SVM) with tf-idf features) in the same hold-one-topic-out manner. Tf-idf-based approaches have shown quite solid performance on stance detection in prior work (Riedel et al., 2017) . This model is deterministic and is thus not run with multiple seeds. It is run with Python3.9 and the sklearn package. The SVM is intended for the feature analysis in Section 4.3, but we present the performance of this model also in Table 2 and the following section.",
"cite_spans": [
{
"start": 217,
"end": 238,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 241,
"end": 242,
"text": "7",
"ref_id": null
},
{
"start": 468,
"end": 489,
"text": "(Riedel et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 725,
"end": 732,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Training and Evaluation Method",
"sec_num": "3.3"
},
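A minimal sketch of such an SVM+tf-idf baseline with sklearn; the hyperparameters here are sklearn defaults, since the paper does not list them:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def train_svm_tfidf(train_texts, train_labels):
    """tf-idf features feeding a linear SVM; no fine-tuning seeds needed."""
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(train_texts, train_labels)
    return model

def macro_f1(model, test_texts, test_labels):
    """Macro F1 over the three stance classes, as in the main evaluation."""
    return f1_score(test_labels, model.predict(test_texts), average="macro")
```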
{
"text": "BERT-base Table 2 shows that mean performance over the 3 classes ('pro', 'con', or 'no argument') is F1 = .617 (stdev over 10 seeds = .006). Reimers et al. (2019) 's reported result (F1 = .613) lies within 1 stdev from this result.",
"cite_spans": [
{
"start": 141,
"end": 162,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results of Reproduction",
"sec_num": "3.4"
},
{
"text": "BERT-large Mean performance over all topics and stance classes is F1 = .596 (stdev over 10 seeds = .043). The performance reported in Reimers et al. (2019) is F1 = .633, which lies within 2 stdev of our result. However, our stdev is relatively high due to high variance of performance over different seeds, with half of our seeds performing noticeably lower than even BERT-base. 8 For the other 5 seeds, the model performed better (F1 = .636, stdev = .007), and within one (much smaller) stdev of the performance reported in Reimers et al. (2019) .",
"cite_spans": [
{
"start": 134,
"end": 155,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 525,
"end": 546,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Reproduction",
"sec_num": "3.4"
},
{
"text": "SVM+tf-idf (non-BERT model) This model performs at F1 = .517 averaged over the held-out topics and three classes ('pro', 'con', and 'no argument'), see Table 2 . This outperforms by .10 points in F1 the best performing LSTM-based architecture presented in Stab et al. (2018) (F1 = .424) , a baseline in Reimers et al. (2019) . Their performance improvement of the BERT model over LSTM was .20 in F1. Comparing our SVM model to BERT, we find a smaller improvement over a non-BERT model: .10 F1 improvement for BERT-base (F1 = .617). Our BERT models still outperform our 7 All our code can be found in the following GitHub repository: https://github.com/myrthereuver/ claims-reproduction.",
"cite_spans": [
{
"start": 256,
"end": 274,
"text": "Stab et al. (2018)",
"ref_id": "BIBREF26"
},
{
"start": 303,
"end": 324,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 152,
"end": 159,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 275,
"end": 286,
"text": "(F1 = .424)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results of Reproduction",
"sec_num": "3.4"
},
{
"text": "8 Our large variance in performance over seeds is due to each seed fine-tuning the model 8 times (once for each topic). The 5 unevenly performing seeds each under-perform on a different topic (F1 < .50) due to only assigning the majority class ('no argument'). Other topics in these 5 seeds do outperform BERT-base. Reimers et al. (2019) . The fourth row shows our non-BERT model (an SVM) beating their LSTM baseline, and the fourth and fifth row show the results of our BERT reproductions. The sixth row shows an average BERT-large performance without the 5 seeds that considerably under-performed for one topic. non-BERT model, as in Reimers et al. (2019) . Our SVM result does fall within 2 stdevs of BERT-large, but this is due to BERT-large's substantial stdev due to a steep drop in performance for half of the seeds.",
"cite_spans": [
{
"start": 316,
"end": 337,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 636,
"end": 657,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Reproduction",
"sec_num": "3.4"
},
{
"text": "Reimers et al. (2019)'s results are reproducible in the sense the first dimension of reproducibility (Cohen et al., 2018): the originally reported numeric values fell within 2 stdevs of our reproduced results for both BERT-base and BERT-large. For BERT-base and 5 of the 10 seeds in BERT-large, we obtained a precision, recall, and F1 that are very similar to the original study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion of reproduction",
"sec_num": "3.5"
},
{
"text": "The results are also reproducible in four of the five reproducibility aspects identified by Fokkens et al. (2013) : under-descriptions of preprocessing, experimental set-up, versioning, and system output. These were described in either the paper, on the author's GitHub page, or in code documentation. We do observe differences in relation to 'system variation' which is inherent to training neural networks, where identical results are seldom obtained. These variations were small for most experiments, except for the 5 random seeds that led to substantial under-performing on one topic for BERT-large.",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "Fokkens et al. (2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion of reproduction",
"sec_num": "3.5"
},
{
"text": "When looking at the second dimension of reproducibility defined by Cohen et al. (2018) (findings), we observe that BERT-base and BERT-large indeed clearly outperform the LSTM baselines from Stab et al. (2018) as well as our own stronger SVM+tfidf non-BERT model on the stance detection task. We were able to reproduce the reported increase in performance of BERT-large over BERT-base and non-BERT models. However, BERT-large also showed considerable under-performance on one topic in 5 out of 10 seeds. We see this outcome as a confirmation that it is important to look at different seeds, and that care should be taken when drawing conclusions based on minor differences when working with neural models.",
"cite_spans": [
{
"start": 190,
"end": 208,
"text": "Stab et al. (2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion of reproduction",
"sec_num": "3.5"
},
{
"text": "The third dimension of reproducibility is that of conclusions. Reimers et al. (2019) conclude that BERT strongly outperforms previous results on identifying arguments for unseen topics, which we confirm, and that these results are \"very encouraging and stress the feasibility of the task\" (Reimers et al., 2019, p. 575) . The remainder of this paper provides further analyses to investigate whether our results also lead to this overall conclusion. In particular, we investigate how our models perform on individual topics (Section 4) and generic topicindependent signals in the data (Section 4.3).",
"cite_spans": [
{
"start": 63,
"end": 84,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 289,
"end": 319,
"text": "(Reimers et al., 2019, p. 575)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion of reproduction",
"sec_num": "3.5"
},
{
"text": "To support the conclusions in Reimers et al. (2019) on the success of cross-topic stance detection, we expect a relative stability of performance over topics. The following sections go into some details not explored in Reimers et al. (2019) , specifically the cross-topic performance of different topics, and the interaction between topic and class and its influence on performance. Table 3 presents the performance of the models on individual topics. The results show that some topics perform considerably worse than others with the cross-topic training method (training on seven topics and testing on the held-out eighth topic). The cloning topic performs more than .07 F1 higher than the averaged model performance (F1 = .693 vs F1 = .617). The abortion and gun control topics perform almost .09 lower than the averaged model performance (F1 = .533 & .530 vs F1 = .617). Note that a difference nearing .10 in F1 score is relatively large, as it is comparable to the difference between the SVM performance and the state-ofthe-art BERT models in the previous section.",
"cite_spans": [
{
"start": 30,
"end": 51,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 219,
"end": 240,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 383,
"end": 390,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Specifics in Classification",
"sec_num": "4"
},
{
"text": "A per-topic analysis in Table 3 shows that the SVM+tf-idf model performs within .10 points of the BERT-base model for seven of the eight topics, with some performing less than .3 points lower than BERT. The only exception is the topic marijuana legalization, which performs .28 points lower than the BERT model. The large average performance increase (+.11 in F1) over SVM comes from BERTbase improving performance on this one topic. Figure 1 presents the BERT-base in-class F1 score of the three classes ('pro', 'con', 'no argument'), and in-topic averaged F1. The red line indicates the average model performance of .617. We see some consistency, e.g. the 'no argument' class consistently scoring around F1 = .80, but we also see some topic-specific behavior. Cloning, minimum wage, and school uniforms obtain higher F1 performance than average for all classes. In contrast, death penalty, gun control, and abortion perform considerably lower than the average F1 performance in the 'pro' and 'con' classes. These topics see in-class performance of even F1 < .50.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 3",
"ref_id": null
},
{
"start": 434,
"end": 442,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Variance over (classes in) topics",
"sec_num": "4.1"
},
{
"text": "Each cross-topic model is trained by removing one topic from the training data. In this way, we remove a different number of training examples each time. The topics with the most training examples for a class (e.g. 'pro' in the gun control topic) therefore have a smaller training set for this class when training a cross-topic model. If there were a linear relationship between dataset size and performance, one would expect that topics with fewer training examples (and therefore more training examples left when this topic is left out of training) to do better than topics with more training examples (whose cross-topic models lose more training examples). Table 1 does show that the 'no argument' class has a three times larger proportion of the training set than the 'pro' and 'con' classes, which could explain the better performance of this class in all topics, but training set size difference does not account for the between-topic variation in the 'pro' and 'con' classes. Instead, Table 1 shows that topics with the most training examples (that means, the largest set of examples removed in a crosstopic model) do not have the worst performing cross-topic models in Figure 1 . For example, the abortion topic has relatively few 'con' examples removed (591) compared to other classes such as cloning, death penalty, and nuclear energy, and yet has the lowest in-class F1 for the 'con' class (in-class F1 = .40). Performance thus appears to be less related to the number of training examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 660,
"end": 667,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 992,
"end": 999,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1177,
"end": 1185,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Variance over (classes in) topics",
"sec_num": "4.1"
},
{
"text": "We investigated the source of low performance on the 'pro' and 'con' class in the abortion topic with confusion matrices, and compared this to a topic where pro and against did not under-perform (minimum wage). We did not pick one specific seed, but calculated the mean percentage of 'true' examples in each confusion matrix cell over all 10 seeds. In the abortion topic, 44 % of 'pro' arguments get classified as 'against', and only 33% get correctly classified as 'pro'. The minimum wage topic shows no discernible pro/against classification confusion, and 60% of all true 'pro' and 'against' arguments are correctly classified. The section below analyzes the misclassifications in low-performing topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variance over (classes in) topics",
"sec_num": "4.1"
},
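A sketch of this analysis (the helper is our own; `y_pred_per_seed` is assumed to hold one prediction list per seed):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

LABELS = ["pro", "con", "no argument"]

def mean_confusion_percentages(y_true, y_pred_per_seed):
    """Row-normalize the confusion matrix of each seed (so each cell is the
    percentage of true examples of a class receiving a given prediction),
    then average the matrices over all seeds."""
    mats = []
    for y_pred in y_pred_per_seed:
        m = confusion_matrix(y_true, y_pred, labels=LABELS).astype(float)
        mats.append(100 * m / m.sum(axis=1, keepdims=True))
    return np.mean(mats, axis=0)
```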
{
"text": "The low performance of 'pro' and 'con' in some topics (abortion, gun control, and death penalty) warrants some further investigation. Table 5 shows four example misclassifications between 'pro' and 'con' by BERT-large in the test examples the model encountered on these topics. 9 Table 3 : BERT-base's performance in F1 (macro) on different held-out topics. The italicized difference shows the smallest difference between the SVM model and the BERT-base model (on the gun control topic), while the bolded difference shows the largest difference (on the marijuana topic). We find two types of misclassifications, each related to topic-specific differences to stance classes. The first type is misclassification due to the sociocultural background knowledge and context of a specific topic's arguments. The second type is related to a model taking the stance towards a subcomponent of a topic and confusing it for the text's overall stance on the topic, e.g. statements in the 'pro' class mostly expressing views against something else related to the argument (unwanted pregnancies, gun violence, innocents dying).",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 280,
"end": 287,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Analysis of Misclassification",
"sec_num": "4.2"
},
{
"text": "Examples of both issues are arguments centering around \"many innocents (babies, children, mentally ill) will die\". There are 5 variations of this argument in these 3 topics: row 1 and row 3 (gun control), row 8 (abortion), and rows 9 and 12 (death penalty) in Table 5 . Not only is one usage of this argument traditionally connected to the 'pro' class of one topic (gun control), and the 'con' class of another (abortion), the implication is: innocents dying is bad. The model seems to lack this world knowledge, and for instance classifies this argument as 'pro' death penalty.",
"cite_spans": [],
"ref_spans": [
{
"start": 260,
"end": 267,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Qualitative Analysis of Misclassification",
"sec_num": "4.2"
},
{
"text": "Another salient example is row 2 of Table 5 . This argument argues in favor of gun rights for selfdefense, but the model misclassifies this as against gun control. The model also fails to connect the second amendment discussion to the against gun control class. This is the same mistake made by the LSTM-model in Stab et al. (2018, p.3671) , showing that BERT appears to not improve over LSTM on the topic-specific nuances here. In other words, it fails to correctly identify the socio-cultural dimensions (Du Bois, 2007) Table 4 : Top-features for different topics according to SVM, Pairwise F-based feature analysis. We see potentially meaningful words in italics (the 'con' class has features based on morality and legality, e.g. bills and statutes), and potential spurious features in bold (such as names websites and even of individuals).",
"cite_spans": [
{
"start": 313,
"end": 339,
"text": "Stab et al. (2018, p.3671)",
"ref_id": null
},
{
"start": 510,
"end": 521,
"text": "Bois, 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 36,
"end": 43,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 522,
"end": 529,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Analysis of Misclassification",
"sec_num": "4.2"
},
{
"text": "Frequency in seeds gun contr. pro con \"When high-capacity magazines were used in mass shootings, 9/10 the death rate rose 63 % and the injury rate rose 156 % .\" gun contr. con pro \"[..] The Second Amendment protects an individual right 7/10 to possess a firearm unconnected with service in a militia , and to use that arm for traditionally lawful purposes , such as self-defense within the home . \" gun contr. pro con \"In this crossfire , bullets would likely hit civilians 9/10 ( imagine a room filled with a crowd and three people shooting at each other ) and the casualty count would increase.\" gun contr. con pro \"Gun enthusiasts understand the benefit 7/10 of large ammo feeders and wish to defend them because they recognize the advantage that such feeders give.\" abortion pro con \"Not only has the biological development not yet occurred to 4/10 support pain experience , but the environment after birth , so necessary to the development of pain experience , is also yet to occur .\" abortion pro con \"Warren concludes that as the fetus satisfies only one criterion, 5/10 consciousness ( and this only after it becomes susceptible to pain ) , the fetus is not a person and abortion is therefore morally permissible .\" abortion con pro It is argued that just as it would not be permissible to refuse 2/10 temporary accommodation for the guest to protect him from physical harm , it would not be permissible to refuse temporary accommodation of a fetus . abortion con pro \"92 % of abortions in America are purely elective 3/10 -done on healthy women to end the lives of healthy children.\" death pen. con pro Mentally ill patients may be put to death . 2/10 death pen. con pro Evidence shows execution does not act as a deterrent to capital punishment. 9/10 death pen. pro con A system in place for the purpose 8/10 of granting justice can not do so for the surviving victims , unless the murderer himself is put to death . death pen. con pro CON : \" ... Since the reinstatement of the modern death pen. , 9 /10 87 people have been freed from death row because they were later proven innocent . ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic True Pred Sentence",
"sec_num": null
},
{
"text": "To analyze which words are used in relation to specific stances and topics, we trained an SVM model with tf-idf features on stance detection on all topics (F1 = .573). For each class pair ('pro' vs 'con', 'pro' vs 'no-argument', etc.) , we extracted top-10 features with the highest coefficient for that specific class. Table 4 presents the most important features of the topic-agnostic model trained on all topics. Some unigrams appear meaningful for the class. For instance, in the cross-topic setting, the word \"morality\" is a feature for the 'con' class. In contrast, the 'no argument' class is often identified with words that appear to have little content-relationship to the class identity: a topic-specific pro-life website (lifenews) or someone's name ('robert').",
"cite_spans": [
{
"start": 188,
"end": 234,
"text": "('pro' vs 'con', 'pro' vs 'no-argument', etc.)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 320,
"end": 327,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "SVM and Lexical Features",
"sec_num": "4.3"
},
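A sketch of this pairwise feature extraction; sklearn's one-vs-one `SVC` with a linear kernel yields one coefficient row per class pair, though the exact setup in the paper may differ:

```python
import numpy as np
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

def top_pairwise_features(texts, labels, k=10):
    """For each class pair, list the k tf-idf features with the largest
    positive and the largest negative coefficients (libsvm's one-vs-one
    convention ties positive weights to the first class of each pair)."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(texts)
    svm = SVC(kernel="linear").fit(X, labels)
    vocab = np.array(vec.get_feature_names_out())
    coefs = svm.coef_
    if hasattr(coefs, "todense"):  # coef_ is sparse when X is sparse
        coefs = np.asarray(coefs.todense())
    top = {}
    for w, (a, b) in zip(coefs, combinations(svm.classes_, 2)):
        order = np.argsort(w)
        top[(a, b)] = {a: vocab[order[-k:]][::-1].tolist(),
                       b: vocab[order[:k]].tolist()}
    return top
```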
{
"text": "We also trained within-topic models to find whether there is topic-specific vocabulary related to stance that differs from the topic-agnostic model. Table 4 also presents the 10 most informative features for a model trained on only the abortion topic (F1 = .595). Immediately we see that there is only limited overlap with the lexical features used to decide between 'pro' and 'con' in a multi-topic scenario. Within only the abortion topic, the 'pro' and 'con' class are defined by concepts related to the lexical content of this specific discussion: babies, life, and birth. We also see the contrast between 'pro' arguments talking about reproduction and the mother, while the 'con' arguments mention life, conception, and babies. This lexical feature analysis shows no apparent overlap between the topic-specific features in the abortion model and the topic-independent features in the topic-agnostic model. This might indicate that vocabulary is quite specifically related to topics in stance detection.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 156,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "SVM and Lexical Features",
"sec_num": "4.3"
},
{
"text": "Stance detection is a difficult NLP task. Despite recent advances by pre-trained Transformers, these models have similar issues in a cross-topic setting as earlier models. This paper reproduced stance detection experiments with pre-trained Transformers by Reimers et al. (2019) , training on seven topics and testing on an eighth topic. We found similar results, but also both class and topic influencing performance. Cross-topic BERT models perform below mean model performance in some topics (abortion, gun control) on the pro and con classes.",
"cite_spans": [
{
"start": 256,
"end": 277,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion: Topic Matters",
"sec_num": "5"
},
{
"text": "This makes us pause about Reimers et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion: Topic Matters",
"sec_num": "5"
},
{
"text": "(2019)'s main claim: does BERT improve crosstopic stance detection over non-Transformer models? We argue this claim needs an asterisk: this cross-topic approach does not work as well for all topics. Different topics show specific vocabularies and socio-cultural contexts, and especially these specific contexts BERT cannot navigate. BERT models still make similar mistakes on gun control as the LSTM-based models in Stab et al. (2018) . These findings lead us to two take-aways. Firstly, we hypothesize that models like BERT rely more on topic-specific features for stance detection than topic-independent lexical words related to argumentation. Thorn Jakobsen et al. (2021) also recently found this, and connected BERT's crosstopic stance detection performance to its focus on spurious topic-specific lexical features (\"gun\", \"criminal\") rather than words related to argumentation. They also conclude a fair real-world evaluation of cross-topic stance detection means reporting the worst performing cross-topic pair rather than average performance over topics.",
"cite_spans": [
{
"start": 416,
"end": 434,
"text": "Stab et al. (2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion: Topic Matters",
"sec_num": "5"
},
{
"text": "Secondly, we also think it is necessary to analyze the context of topics, and its relation to other debate topics within and outside the dataset. Most topics in stance detection studies are currently U.S. socio-political issues. This goes beyond a limitation of language, such as a focus on English without specifying this (Bender, 2019) , since the same socio-cultural topics are not even universally relevant in the English-speaking world (gun control is not a salient discussion in Scotland). Such a focus on topic diversity is also important for usecases. For diversity of viewpoints in search (Draws et al., 2021) or news recommendation (Reuver et al., 2021) , stance detection needs to work on many different topics.",
"cite_spans": [
{
"start": 323,
"end": 337,
"text": "(Bender, 2019)",
"ref_id": "BIBREF3"
},
{
"start": 598,
"end": 618,
"text": "(Draws et al., 2021)",
"ref_id": "BIBREF8"
},
{
"start": 642,
"end": 663,
"text": "(Reuver et al., 2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion: Topic Matters",
"sec_num": "5"
},
{
"text": "Schlangen (2021) states that we need to carefully define specific NLP tasks and capabilities needed to solve them. Modelling cross-topic stance detection in a topic-agnostic manner, while divorcing it from socio-cultural context, might not do justice to stance detection. Future work might focus on the specifics of topics: analyzing similarity between discussions (Wei and Mao, 2019) , or modelling required socio-cultural contextual knowledge ('second amendment is related to gun control'). Models able to deal with topic-specific vocabulary and socio-cultural context of debates might improve on the state-of-the-art of cross-topic stance detection.",
"cite_spans": [
{
"start": 365,
"end": 384,
"text": "(Wei and Mao, 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion: Topic Matters",
"sec_num": "5"
},
{
"text": "There is a wide array of datasets, definitions, and operationalizations of stance detection and classification, and recentlySchiller et al. (2021) gave a great overview in their Section 2, as doK\u00fc\u00e7\u00fck and Can (2020) in their survey.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We would like to note that the stance expressed in a text unit does not have to be the stance of an author, e.g. in cases where someone is writing a piece in which they express or quote someone else's opinion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://webis.de/events/sameside-19/, Accessed on the 22th of September 2021.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For reasons of clarity, we present these dimensions in reverse order compared to Cohen et al. (2018).5 The paper we reproduce,Reimers et al. (2019), does not provide model performance standard deviation over seeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These files revealed that the scraping script broke down in the minimum wage topic due to one specific claim that was archived, but could not be retrieved. \"Despite the inevitable negative outcomes that will surely result from a $ 15 minimum wage -we 've already seen negative effects in Seattle 's restaurant industry -politicians and unions seem intent on engaging in an activity that could be described as an \"economic death wish.\" We speculate this claim could possibly not be retrieved due to it containing the dollar sign, https: //web.archive.org/web/20160217041546/http: //www.aei.org:80/publication/ten-reasonseconomists-object-to-the-minimum-wage/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To ensure we are not cherry-picking examples, we looked at errors that were not unique to just one seed, and identified these examples as salient examples of a general trend.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is funded through Open Competition Digitalization Humanities and Social Science grant nr 406.D1.19.073 awarded by the Netherlands Organization of Scientific Research (NWO). Our computing was done through SURF Research Cloud, a national supercomputer infrastructure in the Netherlands also funded by the NWO. We would like to thank dr. Nils Reimers for sending us their paper's data. We would also like to thank the anonymous reviewers, whose very helpful comments improved the paper. All opinions and remaining errors are our own.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Stance detection on social media: State of the art and trends",
"authors": [
{
"first": "Abeer",
"middle": [],
"last": "Aldayel",
"suffix": ""
},
{
"first": "Walid",
"middle": [],
"last": "Magdy",
"suffix": ""
}
],
"year": 2021,
"venue": "Information Processing & Management",
"volume": "58",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.ipm.2021.102597"
]
},
"num": null,
"urls": [],
"raw_text": "Abeer ALDayel and Walid Magdy. 2021. Stance detection on social media: State of the art and trends. Information Processing & Management, 58(4):102597.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Adversarial learning for zero-shot stance detection on social media",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Allaway",
"suffix": ""
},
{
"first": "Malavika",
"middle": [],
"last": "Srikanth",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4756--4767",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Allaway, Malavika Srikanth, and Kathleen McK- eown. 2021. Adversarial learning for zero-shot stance detection on social media. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 4756-4767, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A systematic review of reproducibility research in natural language processing",
"authors": [
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Shubham",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "381--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anya Belz, Shubham Agarwal, Anastasia Shimorina, and Ehud Reiter. 2021. A systematic review of re- producibility research in natural language process- ing. In Proceedings of the 16th Conference of the European Chapter of the Association for Computa- tional Linguistics: Main Volume, pages 381-393, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The #benderrule: On naming the languages we study and why it matters. The Gradient",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Bender",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Bender. 2019. The #benderrule: On naming the languages we study and why it matters. The Gradi- ent.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Data set for stance and sentiment analysis from user comments on croatian news",
"authors": [
{
"first": "Mihaela",
"middle": [],
"last": "Bo\u0161njak",
"suffix": ""
},
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing",
"volume": "",
"issue": "",
"pages": "50--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihaela Bo\u0161njak and Mladen Karan. 2019. Data set for stance and sentiment analysis from user comments on croatian news. In Proceedings of the 7th Work- shop on Balto-Slavic Natural Language Processing, pages 50-55.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Three dimensions of reproducibility in natural language processing",
"authors": [
{
"first": "K.",
"middle": [
"Bretonnel"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
},
{
"first": "Tiffany",
"middle": [
"J"
],
"last": "Callahan",
"suffix": ""
},
{
"first": "Orin",
"middle": [],
"last": "Hargraves",
"suffix": ""
},
{
"first": "Foster",
"middle": [],
"last": "Goss",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Ide",
"suffix": ""
},
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "N\u00e9v\u00e9ol",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Grouin",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"E"
],
"last": "Hunter",
"suffix": ""
}
],
"year": 2018,
"venue": "LREC... International Conference on Language Resources & Evaluation:[proceedings]. International Conference on Language Resources and Evaluation",
"volume": "2018",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K Bretonnel Cohen, Jingbo Xia, Pierre Zweigenbaum, Tiffany J Callahan, Orin Hargraves, Foster Goss, Nancy Ide, Aur\u00e9lie N\u00e9v\u00e9ol, Cyril Grouin, and Lawrence E Hunter. 2018. Three dimensions of reproducibility in natural language processing. In LREC... International Conference on Language Re- sources & Evaluation:[proceedings]. International Conference on Language Resources and Evaluation, volume 2018, page 156. NIH Public Access.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised user stance detection on twitter",
"authors": [
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Stefanov",
"suffix": ""
},
{
"first": "Micha\u00ebl",
"middle": [],
"last": "Aupetit",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "14",
"issue": "",
"pages": "141--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kareem Darwish, Peter Stefanov, Micha\u00ebl Aupetit, and Preslav Nakov. 2020. Unsupervised user stance de- tection on twitter. In Proceedings of the Interna- tional AAAI Conference on Web and Social Media, volume 14, pages 141-152.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Assessing viewpoint diversity in search results using ranking fairness metrics",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Draws",
"suffix": ""
},
{
"first": "Nava",
"middle": [],
"last": "Tintarev",
"suffix": ""
},
{
"first": "Ujwal",
"middle": [],
"last": "Gadiraju",
"suffix": ""
}
],
"year": 2021,
"venue": "ACM SIGKDD Explorations Newsletter",
"volume": "23",
"issue": "1",
"pages": "50--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Draws, Nava Tintarev, and Ujwal Gadiraju. 2021. Assessing viewpoint diversity in search results us- ing ranking fairness metrics. ACM SIGKDD Explo- rations Newsletter, 23(1):50-58.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The stance triangle. Stancetaking in discourse: Subjectivity, evaluation, interaction",
"authors": [
{
"first": "John W Du",
"middle": [],
"last": "Bois",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "164",
"issue": "",
"pages": "139--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John W Du Bois. 2007. The stance triangle. Stanc- etaking in discourse: Subjectivity, evaluation, inter- action, 164(3):139-182.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Emergent: a novel data-set for stance classification",
"authors": [
{
"first": "William",
"middle": [],
"last": "Ferreira",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: Human language technologies",
"volume": "",
"issue": "",
"pages": "1163--1168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Ferreira and Andreas Vlachos. 2016. Emer- gent: a novel data-set for stance classification. In Proceedings of the 2016 conference of the North American chapter of the association for computa- tional linguistics: Human language technologies, pages 1163-1168.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Offspring from reproduction problems: What replication failure teaches us",
"authors": [
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": ""
},
{
"first": "Marieke",
"middle": [],
"last": "Van Erp",
"suffix": ""
},
{
"first": "Marten",
"middle": [],
"last": "Postma",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
},
{
"first": "Nuno",
"middle": [],
"last": "Freire",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1691--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antske Fokkens, Marieke Van Erp, Marten Postma, Ted Pedersen, Piek Vossen, and Nuno Freire. 2013. Offspring from reproduction problems: What repli- cation failure teaches us. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1691-1701.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A retrospective analysis of the fake news challenge stance-detection task",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Hanselowski",
"suffix": ""
},
{
"first": "PVS",
"middle": [],
"last": "Avinesh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Caspelherr",
"suffix": ""
},
{
"first": "Debanjan",
"middle": [],
"last": "Chaudhuri",
"suffix": ""
},
{
"first": "Christian",
"middle": [
"M"
],
"last": "Meyer",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1859--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Hanselowski, PVS Avinesh, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M Meyer, and Iryna Gurevych. 2018. A retrospective analysis of the fake news chal- lenge stance-detection task. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1859-1874.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Preslav Nakov, and Isabelle Augenstein. 2021. A survey on stance detection for mis-and disinformation identification",
"authors": [
{
"first": "Momchil",
"middle": [],
"last": "Hardalov",
"suffix": ""
},
{
"first": "Arnav",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.00242"
]
},
"num": null,
"urls": [],
"raw_text": "Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2021. A survey on stance detec- tion for mis-and disinformation identification. arXiv preprint arXiv:2103.00242.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Covidlies: Detecting covid-19 misinformation on social media",
"authors": [
{
"first": "Tamanna",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "I",
"middle": [
"V"
],
"last": "Robert L Logan",
"suffix": ""
},
{
"first": "Arjuna",
"middle": [],
"last": "Ugarte",
"suffix": ""
},
{
"first": "Yoshitomo",
"middle": [],
"last": "Matsubara",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Workshop on NLP for COVID-19",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tamanna Hossain, Robert L Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. 2020. Covidlies: Detecting covid-19 misin- formation on social media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Vocabulary choice as an indicator of perspective",
"authors": [
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Eyal",
"middle": [],
"last": "Beigman",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Diermeier",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 conference short papers",
"volume": "",
"issue": "",
"pages": "253--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2010. Vocabulary choice as an indicator of perspective. In Proceedings of the ACL 2010 con- ference short papers, pages 253-257.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stance detection: A survey",
"authors": [
{
"first": "Dilek",
"middle": [],
"last": "K\u00fc\u00e7\u00fck",
"suffix": ""
},
{
"first": "Fazli",
"middle": [],
"last": "Can",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Comput. Surv",
"volume": "53",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3369026"
]
},
"num": null,
"urls": [],
"raw_text": "Dilek K\u00fc\u00e7\u00fck and Fazli Can. 2020. Stance detection: A survey. ACM Comput. Surv., 53(1).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SemEval-2016 task 6: Detecting stance in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "31--41",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1003"
]
},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sob- hani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31- 41, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Same Side Stance Classification Task: Facilitating Argument Stance Classification by Fine-tuning a BERT Model",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Ollinger",
"suffix": ""
},
{
"first": "Lorik",
"middle": [],
"last": "Dumani",
"suffix": ""
},
{
"first": "Premtim",
"middle": [],
"last": "Sahitaj",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Bergmann",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Schenkel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.11163[cs].ArXiv:2004.11163"
]
},
"num": null,
"urls": [],
"raw_text": "Stefan Ollinger, Lorik Dumani, Premtim Sahitaj, Ralph Bergmann, and Ralf Schenkel. 2020. Same Side Stance Classification Task: Facilitating Argument Stance Classification by Fine-tuning a BERT Model. arXiv:2004.11163 [cs]. ArXiv: 2004.11163.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Classification and Clustering of Arguments with Contextualized Word Embeddings",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Tilman",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "567--578",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1054"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Jo- hannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and Clustering of Arguments with Contextualized Word Embeddings. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 567- 578, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "No NLP task should be an island: Multidisciplinarity for diversity in news recommender systems",
"authors": [
{
"first": "Myrthe",
"middle": [],
"last": "Reuver",
"suffix": ""
},
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": ""
},
{
"first": "Suzan",
"middle": [],
"last": "Verberne",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation",
"volume": "",
"issue": "",
"pages": "45--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myrthe Reuver, Antske Fokkens, and Suzan Verberne. 2021. No NLP task should be an island: Multi- disciplinarity for diversity in news recommender sys- tems. In Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Re- port Generation, pages 45-55, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A simple but tough-to-beat baseline for the fake news challenge stance detection task",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Georgios",
"middle": [
"P"
],
"last": "Spithourakis",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.03264"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Riedel, Isabelle Augenstein, Georgios P Sp- ithourakis, and Sebastian Riedel. 2017. A sim- ple but tough-to-beat baseline for the fake news challenge stance detection task. arXiv preprint arXiv:1707.03264.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Stance Detection Benchmark: How Robust is Your Stance Detection? KI -K\u00fcnstliche Intelligenz",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/s13218-021-00714-w"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2021. Stance Detection Benchmark: How Robust is Your Stance Detection? KI -K\u00fcn- stliche Intelligenz.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Targeting the benchmark: On methodology in current natural language processing research",
"authors": [
{
"first": "David",
"middle": [],
"last": "Schlangen",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "670--674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Schlangen. 2021. Targeting the benchmark: On methodology in current natural language processing research. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Nat- ural Language Processing (Volume 2: Short Papers), pages 670-674, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Will it blend? blending weak and strong labeled data in a neural network for argumentation mining",
"authors": [
{
"first": "Eyal",
"middle": [],
"last": "Shnarch",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Alzate",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Dankin",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Gleize",
"suffix": ""
},
{
"first": "Yufang",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "599--605",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2095"
]
},
"num": null,
"urls": [],
"raw_text": "Eyal Shnarch, Carlos Alzate, Lena Dankin, Mar- tin Gleize, Yufang Hou, Leshem Choshen, Ranit Aharonov, and Noam Slonim. 2018. Will it blend? blending weak and strong labeled data in a neu- ral network for argumentation mining. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 599-605, Melbourne, Australia. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Recognizing stances in online debates",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "226--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran and Janyce Wiebe. 2009. Rec- ognizing stances in online debates. In Proceed- ings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 226-234, Suntec, Singapore. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Crosstopic argument mining from heterogeneous sources",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Rai",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3664--3674",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1402"
]
},
"num": null,
"urls": [],
"raw_text": "Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018. Cross- topic argument mining from heterogeneous sources. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 3664-3674, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "AG Semantic Computing, and Henning Wachsmuth. 2021. Same side stance classification",
"authors": [
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": ""
},
{
"first": "Roxanne",
"middle": [
"El"
],
"last": "Baff",
"suffix": ""
},
{
"first": "Khalid",
"middle": [],
"last": "Al-Khatib",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benno Stein, Yamen Ajjour, Roxanne El Baff, Khalid Al-Khatib, Philipp Cimiano, AG Semantic Comput- ing, and Henning Wachsmuth. 2021. Same side stance classification. Preprint.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Get out the vote: Determining support or opposition from congressional floor-debate transcripts",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP '06",
"volume": "",
"issue": "",
"pages": "327--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceed- ings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP '06, page 327-335, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Spurious correlations in crosstopic argument mining",
"authors": [
{
"first": "Terne",
"middle": [],
"last": "Sasha Thorn Jakobsen",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "263--277",
"other_ids": {
"DOI": [
"10.18653/v1/2021.starsem-1.25"
]
},
"num": null,
"urls": [],
"raw_text": "Terne Sasha Thorn Jakobsen, Maria Barrett, and An- ders S\u00f8gaard. 2021. Spurious correlations in cross- topic argument mining. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, pages 263-277, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Modeling transferable topics for cross-target stance detection",
"authors": [
{
"first": "Penghui",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Wenji",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "1173--1176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Penghui Wei and Wenji Mao. 2019. Modeling trans- ferable topics for cross-target stance detection. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, pages 1173-1176.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Hierarchical viewpoint discovery from tweets using bayesian modelling. Expert Systems with Applications",
"authors": [
{
"first": "Lixing",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Deyu",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "116",
"issue": "",
"pages": "430--438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lixing Zhu, Yulan He, and Deyu Zhou. 2019. Hi- erarchical viewpoint discovery from tweets using bayesian modelling. Expert Systems with Applica- tions, 116:430-438.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "BERT-base's performance on different topics plotted in a boxplot, with on the y-axis the F1 score of the 4 categories plotted on the x-axis: 'pro', 'con', 'no argument', and overall. A longer boxplots means more variability over seeds in score. The red line represent the averaged F1 score of the same model (BERT-base), presented as model performance inReimers et al. (2019).",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>Model</td></tr></table>",
"num": null,
"text": "Distribution of the UKP data over topics and over training (70%), test (20%), and validation (10%) sets. BERT-large -5 evenly performing seeds .636 (.007) .532 (.014) .578 (.016) .515 (.016) .567(.022)",
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table",
"html": null
},
"TABREF3": {
"content": "<table><tr><td>held-out</td><td>abortion</td><td>cloning</td><td>death</td><td>gun</td><td>marijuana</td><td>minimum</td><td>nuclear</td><td>school</td></tr><tr><td>topic</td><td/><td/><td>penalty</td><td>control</td><td>legalization</td><td>wage</td><td>energy</td><td>uniform</td></tr><tr><td>SVM+tf-idf</td><td>.463</td><td>.585</td><td>.482</td><td>.515</td><td>.323</td><td>.615</td><td>.598</td><td>.576</td></tr><tr><td>BERT-base</td><td>.533 (+.070</td><td>+.108</td><td>+.080</td><td>+.028</td><td>+.283</td><td>+.055</td><td>+.0850</td><td>+.102</td></tr></table>",
"num": null,
"text": ".011) .693 (.013) .562 (.012) .530 (.013) .607 (.016) .670 (.009) .660 (.011) .678 (.016) diff.",
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>all topics</td><td/><td/><td/><td>abortion topic</td><td/><td/><td/></tr><tr><td>Pro (vs Con)</td><td>Con (vs Pro)</td><td colspan=\"2\">No Argument</td><td>Pro (vs Con)</td><td>Con (vs Pro)</td><td colspan=\"2\">No Argument</td></tr><tr><td/><td/><td>vs Pro</td><td>vs Con</td><td/><td/><td>vs Pro</td><td>vs Con</td></tr><tr><td>pejorative</td><td>morality</td><td>basic</td><td>pronounced</td><td>seek</td><td>babies</td><td>way</td><td>anti</td></tr><tr><td>pronounced</td><td>format</td><td>section</td><td>threatens</td><td>illegal</td><td>abortion</td><td>against</td><td>ways</td></tr><tr><td>activity</td><td>bill</td><td>take</td><td>additional</td><td>reproductive</td><td>life</td><td>we</td><td>over</td></tr><tr><td>relations</td><td>workshop</td><td>robert</td><td>revolt</td><td>simply</td><td>conception</td><td>side</td><td>always</td></tr><tr><td>additional</td><td>workers</td><td>introduced</td><td>now</td><td>humane</td><td>simply</td><td>justify</td><td>thing</td></tr><tr><td colspan=\"2\">unexceptional sources</td><td colspan=\"2\">unquestioned proper</td><td>bear</td><td>risks</td><td colspan=\"2\">experience question</td></tr><tr><td>threatens</td><td>philosophical</td><td>revolt</td><td>typical</td><td>lifers</td><td>abortions</td><td>held</td><td>performed</td></tr><tr><td>variable</td><td colspan=\"2\">coincidentally scientifically</td><td>mentor</td><td>mother</td><td colspan=\"2\">complications tell</td><td>debate</td></tr><tr><td>39th</td><td>statutes</td><td>lifenews</td><td>sharing</td><td>healthy</td><td>birth</td><td>single</td><td>illegal</td></tr><tr><td>where</td><td>phrases</td><td>individuals</td><td>denuded</td><td>lives</td><td>kill</td><td>had</td><td>equal</td></tr></table>",
"num": null,
"text": "of this debate.",
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table/>",
"num": null,
"text": "Misclassifications on political topics with considerable 'pro' and 'con' confusion",
"type_str": "table",
"html": null
}
}
}
}