{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:27.989367Z" }, "title": "Synthetic Examples Improve Cross-Target Generalization: A Study on Stance Detection on a Twitter Corpus", "authors": [ { "first": "Costanza", "middle": [], "last": "Conforti", "suffix": "", "affiliation": { "laboratory": "Language Technology Lab", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Jakob", "middle": [], "last": "Berndt", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Mohammad", "middle": [ "Taher" ], "last": "Pilehvar", "suffix": "", "affiliation": { "laboratory": "Language Technology Lab", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Chryssi", "middle": [], "last": "Giannitsarou", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Flavio", "middle": [], "last": "Toxvaerd", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Nigel", "middle": [], "last": "Collier", "suffix": "", "affiliation": { "laboratory": "Language Technology Lab", "institution": "University of Cambridge", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Cross-target generalization is a known problem in stance detection (SD), where systems tend to perform poorly when exposed to targets unseen during training. Given that data annotation is expensive and time-consuming, finding ways to leverage abundant unlabeled in-domain data can offer great benefits. In this paper, we apply a weakly supervised framework to enhance cross-target generalization through synthetically annotated data. We focus on Twitter SD and show experimentally that integrating synthetic data is helpful for cross-target generalization, leading to significant improvements in performance, with gains in F 1 scores ranging from +3.4 to +5.1.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Cross-target generalization is a known problem in stance detection (SD), where systems tend to perform poorly when exposed to targets unseen during training. Given that data annotation is expensive and time-consuming, finding ways to leverage abundant unlabeled in-domain data can offer great benefits. In this paper, we apply a weakly supervised framework to enhance cross-target generalization through synthetically annotated data. We focus on Twitter SD and show experimentally that integrating synthetic data is helpful for cross-target generalization, leading to significant improvements in performance, with gains in F 1 scores ranging from +3.4 to +5.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Stance Detection (SD) is a widely investigated task (Mohammad et al., 2017) , which constitutes an important component of many complex NLP problems, ranging from fake news detection to rumour verification (Vlachos and Riedel, 2014; Baly et al., 2018; Zubiaga et al., 2018b) . Since from early works (Agrawal et al.) , research on SD focused on user-generated content, ranging from blogs and commenting sections on websites (Hercig et al.) , to Reddit or Facebook posts (Klenner et al.) 
and, above all, Twitter data (Inkpen et al., 2017; Zubiaga et al., 2018a) .", "cite_spans": [ { "start": 52, "end": 75, "text": "(Mohammad et al., 2017)", "ref_id": "BIBREF23" }, { "start": 205, "end": 231, "text": "(Vlachos and Riedel, 2014;", "ref_id": "BIBREF34" }, { "start": 232, "end": 250, "text": "Baly et al., 2018;", "ref_id": "BIBREF3" }, { "start": 251, "end": 273, "text": "Zubiaga et al., 2018b)", "ref_id": "BIBREF37" }, { "start": 299, "end": 315, "text": "(Agrawal et al.)", "ref_id": null }, { "start": 423, "end": 438, "text": "(Hercig et al.)", "ref_id": null }, { "start": 469, "end": 485, "text": "(Klenner et al.)", "ref_id": null }, { "start": 515, "end": 536, "text": "(Inkpen et al., 2017;", "ref_id": "BIBREF16" }, { "start": 537, "end": 559, "text": "Zubiaga et al., 2018a)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, Conforti et al. (2020) released Will-They-Won't-They (WT-WT), a very large corpus of stance-annotated tweets discussing five US mergers and acquisitions (M&A) operations spanning over two industries: healthcare and entertainment. M&A is a general term that refers to the process in which the ownership of companies are transferred. Such process has many stages that range from informal talks to the closing of a deal, and discussions may not be publicly disclosed until a formal agreement is signed (Bruner and Perella, 2004) : in this sense, the analysis of the evolution of opinions and concerns expressed by users about a possible M&A operation, from early stage discussion to the signing of the merger (or its rejection), is a process similar to rumor verification, a widely studied field (Zubiaga et al., 2018a) . Interestingly, Conforti et al. (2020) observed a consistent drop in performance when a system trained on mergers in one industry is tested on data discussing a merger in a different industry. Such a performance drop when testing conditions deviate from training conditions is a known problem in Stance Detection (SD) (Aker et al.) .", "cite_spans": [ { "start": 10, "end": 32, "text": "Conforti et al. (2020)", "ref_id": "BIBREF7" }, { "start": 509, "end": 535, "text": "(Bruner and Perella, 2004)", "ref_id": "BIBREF5" }, { "start": 803, "end": 826, "text": "(Zubiaga et al., 2018a)", "ref_id": "BIBREF35" }, { "start": 844, "end": 866, "text": "Conforti et al. 
(2020)", "ref_id": "BIBREF7" }, { "start": 1146, "end": 1159, "text": "(Aker et al.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we investigate the impact of using synthetically annotated data to improve zero-shot cross-target generalization in Twitter SD:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) We investigate a weakly supervised framework for SD, which integrates synthetically annotated data to improve performance on new targets; as to our knowledge, we are the first to use synthetically annotated data for SD;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) We test our framework on Twitter SD and prove that it successfully improves cross-target generalization on new, unseen targets;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(3) We extend the WT-WT corpus with additional annotated tweets discussing M&A operations in one additional domain, which we release for future research on cross-target generalization 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given an in-domain (ID) test set and a gold out-ofdomain (OOD) train set, we augment the corpus with synthetically labeled ID data ( Figure 1 ): 1. We train a SD system on the gold OOD data. 2. We crawl for a large amount of unlabeled ID data and label it with the system trained in 1, obtaining silver, synthetically annotated data. 3. We train a new system on both gold OOD and synthetic ID data: in this way, the system is exposed to a gold signal from the OOD data and to a noisy but ID signal from silver data. 4. We predict the ID test data with the system trained in 3.", "cite_spans": [], "ref_spans": [ { "start": 133, "end": 141, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Cross-Target Generalization with Synthetically Annotated Samples", "sec_num": "2" }, { "text": "Comparison with previous work on Data Augmentation and Domain Adaptation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Target Generalization with Synthetically Annotated Samples", "sec_num": "2" }, { "text": "Note that this framework differs from data augmentation (DAug) strategies adopted to supply for small training data, like in question answering (Kafle et al.) , machine translation (Fadaee et al.) distillation (Tang et al., 2019) , or for adversarial sample generation (Jia and Liang, 2017) . Such techniques, inspired by DAug in speech recognition and computer vision (Chatfield et al., 2014) , work by deformating gold samples to generate new artificial samples (for example, by random token masking, or POS-or semantics-based token replacement). Our approach differs in a number of aspects:", "cite_spans": [ { "start": 144, "end": 158, "text": "(Kafle et al.)", "ref_id": null }, { "start": 181, "end": 196, "text": "(Fadaee et al.)", "ref_id": null }, { "start": 210, "end": 229, "text": "(Tang et al., 2019)", "ref_id": "BIBREF33" }, { "start": 269, "end": 290, "text": "(Jia and Liang, 2017)", "ref_id": "BIBREF17" }, { "start": 369, "end": 393, "text": "(Chatfield et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Target Generalization with Synthetically Annotated Samples", "sec_num": "2" }, { "text": "1. 
In DAug the goal is to enlarge a set of initial ID data; here, we assume we don't have any ID training data, but only OOD; 2. For this reason, while DAug helps to cope with data sparsity, our approach is also useful for domain shifts; 3. In DAug, sample generation might introduce two kinds of noise: it can lead to mismatches between the new samples and the associated labels, and also produce ungrammatical samples; in our approach, the system is always exposed to well-structured input: the only noise are potential errors in synthetic labeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Target Generalization with Synthetically Annotated Samples", "sec_num": "2" }, { "text": "Our approach fits into the broad family of weaklyand semi-supervised frameworks which have been adopted to tackle domain adaptation (DAda) problems (S\u00f8gaard, 2013) . In recent literature, such methods have been applied with mixed success to many tasks, ranging from named entity recognition (Fries et al., 2017) to relation extraction (Mintz et al., 2009) , tagging (Plank et al., 2014 ), parsing (McClosky et al., 2010 , and sentiment analysis (Blitzer et al., 2007; Ruder and Plank, 2018; Ratner et al., 2020) . In this paper, we propose to apply weakly supervision to SD, by adopting the extremely simple and inexpensive framework described above.", "cite_spans": [ { "start": 148, "end": 163, "text": "(S\u00f8gaard, 2013)", "ref_id": "BIBREF32" }, { "start": 291, "end": 311, "text": "(Fries et al., 2017)", "ref_id": "BIBREF11" }, { "start": 335, "end": 355, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF22" }, { "start": 366, "end": 385, "text": "(Plank et al., 2014", "ref_id": "BIBREF24" }, { "start": 386, "end": 419, "text": "), parsing (McClosky et al., 2010", "ref_id": null }, { "start": 445, "end": 467, "text": "(Blitzer et al., 2007;", "ref_id": "BIBREF4" }, { "start": 468, "end": 490, "text": "Ruder and Plank, 2018;", "ref_id": "BIBREF28" }, { "start": 491, "end": 511, "text": "Ratner et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Target Generalization with Synthetically Annotated Samples", "sec_num": "2" }, { "text": "SD is a widely investigated field in NLP. Starting from Mohammad et al. 2017, research in SD focused on the analysis of Twitter posts. Another research direction explored the classification of Twitter users with respect to given topics, like political independence (Darwish et al., 2019) . Work on other types of user-generated data includes SD on parenting blogs , political posts on newspapers websites (Hanselowski et al., 2018) , posts on online debate forums on various topics (Hasan and Ng, 2014) and posts on wordpress blogs (Simaki et al., 2017) . SD has been also integrated into Fake News Detection (Pomerleau and Rao, 2017) and constitutes an important step in the rumor verification pipeline (Zubiaga et al., 2018b) : in this framework, popular shared tasks focused on SD of rumorous tweets (Gorrell et al., 2018) and Reddit posts (Gorrell et al., 2018) . These works analyze tweets in a tree-shaped stream (Zubiaga et al., 2015) . Note that SD constitutes a related but different task than sentiment analysis (Mohammad et al., 2017) : the latter focuses on the polarity expressed w.r.t. a topic, while the former aims to determine the text's orientation w.r.t. the topic. 
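As a concrete illustration of the four-step weakly supervised framework introduced in Section 2, the following minimal sketch shows one possible realization in Python. It is a sketch under stated assumptions only: the toy data, the TF-IDF plus logistic-regression stand-in for the stance classifier, and all variable names are illustrative, not the authors' implementation, and the merger target text is omitted for brevity.

# Minimal, runnable sketch of the four-step weakly supervised pipeline of Section 2.
# Everything here (data, model choice, names) is illustrative, not the authors' code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Gold out-of-domain (OOD) training data: (tweet, stance-label) pairs.
gold_ood_tweets = [
    'reports say the deal will close next quarter',
    'regulators are expected to block the merger',
    'what does this merger mean for customers?',
    'unrelated chatter about the weather',
]
gold_ood_labels = ['support', 'refute', 'comment', 'unrelated']

# Unlabeled in-domain (ID) tweets crawled for the new target.
unlabeled_id_tweets = [
    'sources claim the acquisition is almost finalised',
    'analysts doubt the takeover will ever be approved',
]

# Step 1: train a stance detection system on the gold OOD data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(gold_ood_tweets, gold_ood_labels)

# Step 2: synthetically label the unlabeled ID tweets, obtaining silver data.
silver_labels = model.predict(unlabeled_id_tweets)

# Step 3: retrain on the union of gold OOD and silver ID data, so the new
# system sees both a clean OOD signal and a noisy but in-domain signal.
model_aug = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model_aug.fit(gold_ood_tweets + unlabeled_id_tweets,
              gold_ood_labels + list(silver_labels))

# Step 4: predict the ID test set with the retrained system.
id_test_tweets = ['the companies confirmed the merger agreement today']
print(model_aug.predict(id_test_tweets))

The same loop applies unchanged when the stand-in classifier is replaced by the MLP described in Section 4.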
Consider the following tweet:", "cite_spans": [ { "start": 265, "end": 287, "text": "(Darwish et al., 2019)", "ref_id": "BIBREF8" }, { "start": 405, "end": 431, "text": "(Hanselowski et al., 2018)", "ref_id": "BIBREF13" }, { "start": 532, "end": 553, "text": "(Simaki et al., 2017)", "ref_id": "BIBREF30" }, { "start": 609, "end": 634, "text": "(Pomerleau and Rao, 2017)", "ref_id": "BIBREF25" }, { "start": 704, "end": 727, "text": "(Zubiaga et al., 2018b)", "ref_id": "BIBREF37" }, { "start": 803, "end": 825, "text": "(Gorrell et al., 2018)", "ref_id": "BIBREF12" }, { "start": 843, "end": 865, "text": "(Gorrell et al., 2018)", "ref_id": "BIBREF12" }, { "start": 919, "end": 941, "text": "(Zubiaga et al., 2015)", "ref_id": "BIBREF36" }, { "start": 1022, "end": 1045, "text": "(Mohammad et al., 2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work on Stance Detection", "sec_num": "3" }, { "text": "\u2022 #Cancer patients will suffer if CVSHealth buys Aetna CVS #PBM has resulted in delays in therapy, switches, etc all documented. Terrible! The sentiment of the tweet w.r.t. the target is negative: the user believes that the merger would harm patients; however, its stance is comment, as it is not stating that the merger is going to happen or to be rejected, but is talking about its consequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work on Stance Detection", "sec_num": "3" }, { "text": "Data. We consider the following data (Table 1 -2):", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 45, "text": "(Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "\u2022 Annotated data. The WT-WT corpus constitutes our primary source of labeled data, which we extend with gold-annotated tweets discussing a merger in the defense industry, following the same procedure as in Conforti et al. (2020) . Each {tweet, merger} sample is annotated with a label from support, comment, refute and unrelated, which expresses its stance w.r.t the likelihood of the merger to happen. \u2022 Unlabeled data. We crawl for 16 additional mergers, obtaining 134,922 unlabeled tweets. We consider 3 healthcare mergers as gold train data (AET HUM, ANTM CI, CVS AET) and 3 test sets: CI ESRX (healthcare, ID), DIS FOXA (entertainment, OOD) and UTX COL (defense, OOD).", "cite_spans": [ { "start": 206, "end": 228, "text": "Conforti et al. (2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Models and Hyperparameters. We employ a multi-layer perceptron (MLP) classifier, which takes as input the concatenation of the tweet's and the target's TF-IDF representations and their cosine similarity. This simple model achieved good results on SD (Riedel et al., 2017) and is relatively stable over parameter selection. Hyperparameters used are listed in Table 6 (Appendix B) for replication. Synthetic Label Generation. We train a system on the gold train set (total 30,367 samples). We use early stopping with a patience of 5 over the Table 3 : Results of SD on the three test sets (one ID and two OOD), when selecting synthetic data of different types; as recommended when dealing with unbalanced class distribution (Hanselowski et al., 2018) , we report on macro-averaged precision, recall and F 1 score; the last four columns report on single label accuracy. heldout data. The system achieved an F 1 score of 78.33 on the heldout data. 
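The following sketch illustrates the feature construction just described: the concatenation of the tweet's and the target's TF-IDF vectors plus their cosine similarity, fed to an MLP. It is an approximation under stated assumptions: scikit-learn is assumed, its MLPClassifier does not expose the hidden-layer dropout of Table 6, training epochs are approximated by max_iter, and the example tweets, target string and labels are purely illustrative, not taken from the corpus.

# Sketch of the TF-IDF + cosine-similarity features and MLP classifier
# (approximation of the Riedel et al. (2017)-style baseline; illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

def build_features(tweets, targets, vectorizer):
    # Concatenate the tweet and target TF-IDF vectors with their cosine similarity.
    tweet_vecs = vectorizer.transform(tweets).toarray()
    target_vecs = vectorizer.transform(targets).toarray()
    num = np.sum(tweet_vecs * target_vecs, axis=1)
    den = (np.linalg.norm(tweet_vecs, axis=1) *
           np.linalg.norm(target_vecs, axis=1)) + 1e-9
    cos_sim = (num / den).reshape(-1, 1)
    return np.hstack([tweet_vecs, target_vecs, cos_sim])

# Toy data; the target is rendered as a short text description (an assumption).
train_tweets = ['cvs is said to buy aetna soon',
                'the cvs aetna deal faces antitrust review',
                'totally unrelated tweet about lunch']
train_targets = ['cvs buys aetna'] * 3
train_labels = ['support', 'comment', 'unrelated']

# BoW/TF-IDF vocabulary of 3,000 terms, as in Table 6.
vectorizer = TfidfVectorizer(max_features=3000)
vectorizer.fit(train_tweets + train_targets)

X_train = build_features(train_tweets, train_targets, vectorizer)
# Hidden layer of 100 units, Adam with learning rate 0.001, batch size 32;
# dropout (0.2 in Table 6) is not available in MLPClassifier and is omitted here.
clf = MLPClassifier(hidden_layer_sizes=(100,), solver='adam',
                    learning_rate_init=0.001, batch_size=32, max_iter=70)
clf.fit(X_train, train_labels)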
Then, the unlabeled data is annotated using the trained system. The predicted label distribution reflects the actual merger output (Table 2 ). Refer to Table 5 (Appendix A) for qualitative examples of correctly and wrongly synthetically annotated samples.", "cite_spans": [ { "start": 250, "end": 271, "text": "(Riedel et al., 2017)", "ref_id": "BIBREF27" }, { "start": 722, "end": 748, "text": "(Hanselowski et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 358, "end": 365, "text": "Table 6", "ref_id": null }, { "start": 540, "end": 547, "text": "Table 3", "ref_id": null }, { "start": 1075, "end": 1083, "text": "(Table 2", "ref_id": "TABREF3" }, { "start": 1096, "end": 1103, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Baseline. Table 3 reports on results without using any synthetic data. As expected, we observe a notable gap in generalization performance between the ID healthcare test set and the OOD test sets.", "cite_spans": [], "ref_spans": [ { "start": 10, "end": 17, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Discussion", "sec_num": "5" }, { "text": "Experiment I. To understand the impact of including different types of synthetic data during training, we consider three settings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Discussion", "sec_num": "5" }, { "text": "(1) related mergers: adding synthetic data from mergers which are ID w.r.t. the considered test set (we select ID mergers for each test set according to similarities between industries, see Appendix A);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Discussion", "sec_num": "5" }, { "text": "(2) succeeded mergers: adding data from mergers which were successful: such mergers tend to better match the distribution of the test mergers, as all of them succeeded;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Discussion", "sec_num": "5" }, { "text": "(3) all mergers: adding data from all synthetically annotated mergers: this last setting was implemented to test whether synthetically annotated data, even if not perfectly ID w.r.t. the testset, could have a positive regularization function beyond DA (as hypothesized by Sennrich et al. (2016) in the context of Machine Translation).", "cite_spans": [ { "start": 272, "end": 294, "text": "Sennrich et al. (2016)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Discussion", "sec_num": "5" }, { "text": "For experiments, we randomly add synthetic samples with a proportion of 50% w.r.t. the train set size; to account for uncertainty, we use sample weighting for synthetic samples: sup, ref and com are weighted 0.6, while unr are weighted 0.2 (after qualitative analysis, we found them to be noisier). Table 3 show that adding synthetic samples leads to improvements in generalization over OOD test sets in all considered settings (up to +3.4 in F 1 score for FOXA DIS and up to +5.1 for UTX COL; note that results on UTX COL without synthetic data were significantly lower than on FOXA DIS). This is in line with previous results on semi-supervised learning investigating other tasks, such as sentiment analysis (Blitzer et al., 2007) or text categorization (Ando and Zhang, 2005) . 
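The gold/silver mixing and the per-stance sample weights described above (0.6 for support, refute and comment, 0.2 for unrelated, 1.0 for gold samples) can be sketched as follows. A logistic-regression stand-in is used here only because its fit method exposes sample_weight directly; the helper name, the toy data and the use of scikit-learn are assumptions for illustration, not the authors' code.

# Sketch of mixing gold and silver samples (50% of the train set size) with
# per-stance weights for the silver samples. Illustrative only.
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

SILVER_WEIGHT = {'support': 0.6, 'refute': 0.6, 'comment': 0.6, 'unrelated': 0.2}

def mix_gold_and_silver(gold, silver, ratio=0.5, seed=0):
    # Randomly add silver samples up to ratio * |gold| and return
    # (texts, labels, sample_weights); gold samples get weight 1.0.
    rng = random.Random(seed)
    n_silver = int(ratio * len(gold))
    silver_subset = rng.sample(silver, min(n_silver, len(silver)))
    texts = [t for t, _ in gold] + [t for t, _ in silver_subset]
    labels = [y for _, y in gold] + [y for _, y in silver_subset]
    weights = [1.0] * len(gold) + [SILVER_WEIGHT[y] for _, y in silver_subset]
    return texts, labels, weights

# Toy data, for illustration only.
gold = [('deal expected to close soon', 'support'),
        ('the merger was called off', 'refute'),
        ('what a strange merger', 'comment'),
        ('nothing to do with any merger', 'unrelated')]
silver = [('insiders say the acquisition is on track', 'support'),
          ('random tweet about sports', 'unrelated')]

texts, labels, weights = mix_gold_and_silver(gold, silver)
vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression(max_iter=1000)
clf.fit(X, labels, sample_weight=weights)

Down-weighting the unrelated silver samples reflects the qualitative observation above that they are noisier than the other synthetically assigned stances.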
Interestingly, synthetic samples didn't bring any improvement to the ID test set; moreover, best results overall were obtained with the related merger setting: this seems to indicate that synthetic data act as a powerful domain adaptation technique rather than as a regularizer alone, this is in line with findings in Machine Translation (Edunov et al., 2018) .", "cite_spans": [ { "start": 710, "end": 732, "text": "(Blitzer et al., 2007)", "ref_id": "BIBREF4" }, { "start": 756, "end": 778, "text": "(Ando and Zhang, 2005)", "ref_id": "BIBREF2" }, { "start": 1119, "end": 1140, "text": "(Edunov et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 299, "end": 306, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Discussion", "sec_num": "5" }, { "text": "Experiment II. We consider the best performing setting, related mergers, and perform a second set of experiments to understand the impact of adding synthetic samples belonging to different stances; we consider: only unr; only com; only unr+com; sup+ref +com; sup+ref ; and finally adding samples from all stances. Differences in performance between settings are negligible (Table 4) . Concerning single labels, synthetic samples had the most significant impact on unr not only for OOD testsets (up to +39.7 in accuracy for FOXA DIS and +4.5 for UTX COL), but even for ID (+18.44). Experiment III. We run a final set of experiments to investigate the relation between performance and the amount of synthetic data considered. For both operations (Figure 2 ), we observe that improvements in F 1 score are supported by a rise in recall which reaches a pleateau around 30% and, for UTX COL, in precision.", "cite_spans": [], "ref_spans": [ { "start": 373, "end": 382, "text": "(Table 4)", "ref_id": "TABREF5" }, { "start": 744, "end": 753, "text": "(Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results in", "sec_num": null }, { "text": "We investigated an inexpensive framework to integrate unlabeled ID data to improve cross-target SD. We studied Twitter SD and showed, through a comprehensive set of experiments, that it is a promising strategy. We reserve to study its applicability to other domains in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "https://github.com/cambridge-wtwt/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their efforts and for the constructive suggestions. We gratefully acknowledge funding from the Keynes Fund, University of Cambridge (grant no. JHOQ). CC is grateful to NERC DREAM CDT (grant no. 1945246) for partially funding this work. CG and FT are thankful to the Cambridge Endowment for Research in Finance (CERF).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Appendix A: Details on Data Table 5 reports examples of correctly and wrongly synthetically labeled samples.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 35, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "For each test set, we include synthetically annotated tweets from a number of related mergers. Related mergers have been manually selected by an expert in the Economics domain, based on industry similarity.\u2022 CI ESRX (health): Preprocessing. 
We perform the following steps on all tweets: lowercasing, tokenization; digits/URL normalization; stripping of the # sign from hashtags; normalization of low-frequency users.Hyperparameters Specifications Hyperparameters are reported in Table 6 . When possible, we follow Riedel et al. (2017) for parameter selection. Note that we perform minimal parameter tuning: the goal of this paper is to investigate the efficacy of synthetically annotated data for SD, independently from the chosen architecture.batch size 32 epochs 70 optimizerAdam (\u03bb = 0.001) BoW vocabulary size 3000 dense hidden layer size 100 hidden layer dropout 0.2 ", "cite_spans": [ { "start": 514, "end": 534, "text": "Riedel et al. (2017)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 479, "end": 486, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Appendix B: Details on Modeling", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Mining newsgroups using networks arising from social behavior", "authors": [ { "first": "Rakesh", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Ramakrishnan", "middle": [], "last": "Sridhar Rajagopalan", "suffix": "" }, { "first": "Yirong", "middle": [], "last": "Srikant", "suffix": "" }, { "first": "", "middle": [], "last": "Xu", "suffix": "" } ], "year": null, "venue": "Proceedings of WWW 2003", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/775152.775227" ] }, "num": null, "urls": [], "raw_text": "Rakesh Agrawal, Sridhar Rajagopalan, Ramakrishnan Srikant, and Yirong Xu. Mining newsgroups using networks arising from social behavior. In Proceed- ings of WWW 2003. ACM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Stance classification in out-of-domain rumours: A case study around mental health disorders", "authors": [ { "first": "Ahmet", "middle": [], "last": "Aker", "suffix": "" }, { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Kolliakou", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" } ], "year": null, "venue": "Social Informatics -9th International Conference", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/978-3-319-67256-4_6" ] }, "num": null, "urls": [], "raw_text": "Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, Anna Kolliakou, Rob Procter, and Maria Liakata. Stance classification in out-of-domain rumours: A case study around mental health disorders. In Social Informatics -9th International Conference.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A framework for learning predictive structures from multiple tasks and unlabeled data", "authors": [ { "first": "Rie", "middle": [], "last": "Kubota", "suffix": "" }, { "first": "Ando", "middle": [], "last": "", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2005, "venue": "Journal of Machine Learning Research", "volume": "6", "issue": "", "pages": "1817--1853", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. 
Journal of Machine Learning Research, 6:1817-1853.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Integrating stance detection and fact checking in a unified corpus", "authors": [ { "first": "Ramy", "middle": [], "last": "Baly", "suffix": "" }, { "first": "Mitra", "middle": [], "last": "Mohtarami", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Glass", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramy Baly, Mitra Mohtarami, James R. Glass, Llu\u00eds M\u00e0rquez, Alessandro Moschitti, and Preslav Nakov. 2018. Integrating stance detection and fact checking in a unified corpus. In Proceedings of NAACL 2018.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classi- fication. In Proceedings of ACL 2007.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Applied mergers and acquisitions", "authors": [ { "first": "F", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Joseph", "middle": [ "R" ], "last": "Bruner", "suffix": "" }, { "first": "", "middle": [], "last": "Perella", "suffix": "" } ], "year": 2004, "venue": "", "volume": "173", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert F Bruner and Joseph R Perella. 2004. Applied mergers and acquisitions, volume 173. John Wiley & Sons.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Return of the devil in the details: Delving deep into convolutional nets", "authors": [ { "first": "Ken", "middle": [], "last": "Chatfield", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Vedaldi", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2014, "venue": "BMVC 2014", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Return of the devil in the details: Delving deep into convolutional nets. In BMVC 2014. 
BMVA Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Will-they-won't-they: A very large dataset for stance detection on twitter", "authors": [ { "first": "Costanza", "middle": [], "last": "Conforti", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Berndt", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" }, { "first": "Chryssi", "middle": [], "last": "Giannitsarou", "suffix": "" }, { "first": "Flavio", "middle": [], "last": "Toxvaerd", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Collier", "suffix": "" } ], "year": 2020, "venue": "Proceedings of ACL 2020", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.157" ] }, "num": null, "urls": [], "raw_text": "Costanza Conforti, Jakob Berndt, Mohammad Taher Pilehvar, Chryssi Giannitsarou, Flavio Toxvaerd, and Nigel Collier. 2020. Will-they-won't-they: A very large dataset for stance detection on twitter. In Proceedings of ACL 2020.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unsupervised user stance detection on twitter", "authors": [ { "first": "Kareem", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Stefanov", "suffix": "" }, { "first": "J", "middle": [], "last": "Micha\u00ebl", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Aupetit", "suffix": "" }, { "first": "", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kareem Darwish, Peter Stefanov, Micha\u00ebl J. Aupetit, and Preslav Nakov. 2019. Unsupervised user stance detection on twitter. CoRR, abs/1904.02000.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Understanding back-translation at scale", "authors": [ { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2018, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of EMNLP 2018.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Data augmentation for low-resource neural machine translation", "authors": [ { "first": "Marzieh", "middle": [], "last": "Fadaee", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": null, "venue": "Proceedings of ACL 2017", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P17-2090" ] }, "num": null, "urls": [], "raw_text": "Marzieh Fadaee, Arianna Bisazza, and Christof Monz. Data augmentation for low-resource neural machine translation. 
In Proceedings of ACL 2017.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Swellshark: A generative model for biomedical named entity recognition without labeled data", "authors": [ { "first": "Jason", "middle": [ "A" ], "last": "Fries", "suffix": "" }, { "first": "Sen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Ratner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason A. Fries, Sen Wu, Alexander Ratner, and Christo- pher R\u00e9. 2017. Swellshark: A generative model for biomedical named entity recognition without la- beled data. CoRR, abs/1704.06360.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Rumoureval 2019: Determining rumour veracity and support for rumours. CoRR", "authors": [ { "first": "Genevieve", "middle": [], "last": "Gorrell", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Kochkina", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Genevieve Gorrell, Kalina Bontcheva, Leon Derczyn- ski, Elena Kochkina, Maria Liakata, and Arkaitz Zubiaga. 2018. Rumoureval 2019: Determining rumour veracity and support for rumours. CoRR, abs/1809.06683.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A retrospective analysis of the fake news challenge stance-detection task", "authors": [ { "first": "Andreas", "middle": [], "last": "Hanselowski", "suffix": "" }, { "first": "P", "middle": [], "last": "Avinesh", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Caspelherr", "suffix": "" }, { "first": "Debanjan", "middle": [], "last": "Chaudhuri", "suffix": "" }, { "first": "Christian", "middle": [ "M" ], "last": "Meyer", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Hanselowski, Avinesh P., Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018. A retrospective analysis of the fake news challenge stance-detection task. In Proceedings of COLING 2018.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Why are you taking this stance? identifying and classifying reasons in ideological debates", "authors": [ { "first": "Saidul", "middle": [], "last": "Kazi", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Hasan", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP 2014", "volume": "", "issue": "", "pages": "751--762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kazi Saidul Hasan and Vincent Ng. 2014. Why are you taking this stance? identifying and classifying reasons in ideological debates. 
In Proceedings of EMNLP 2014, pages 751-762.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Detecting stance in czech news commentaries", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Hercig", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Krejzl", "suffix": "" }, { "first": "Barbora", "middle": [], "last": "Hourov\u00e1", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Steinberger", "suffix": "" }, { "first": "Ladislav", "middle": [], "last": "Lenc", "suffix": "" } ], "year": 2017, "venue": "Proceedings of SloNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Hercig, Peter Krejzl, Barbora Hourov\u00e1, Josef Steinberger, and Ladislav Lenc. Detecting stance in czech news commentaries. In Proceedings of SloNLP 2017.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A dataset for multi-target stance detection", "authors": [ { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" } ], "year": 2017, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "551--557", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana Inkpen, Xiaodan Zhu, and Parinaz Sobhani. 2017. A dataset for multi-target stance detection. In Proceedings of EACL 2017, pages 551-557. As- sociation for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Adversarial examples for evaluating reading comprehension systems", "authors": [ { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2017, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2021--2031", "other_ids": { "DOI": [ "10.18653/v1/D17-1" ] }, "num": null, "urls": [], "raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of EMNLP 2017, pages 2021-2031, Copenhagen, Denmark. Association for Computa- tional Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Data augmentation for visual question answering", "authors": [ { "first": "Kushal", "middle": [], "last": "Kafle", "suffix": "" }, { "first": "Mohammed", "middle": [ "A" ], "last": "Yousefhussien", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Kanan", "suffix": "" } ], "year": null, "venue": "Proceedings of INLG 2017", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kushal Kafle, Mohammed A. Yousefhussien, and Christopher Kanan. Data augmentation for visual question answering. In Proceedings of INLG 2017.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Stance detection in facebook posts of a german right-wing party", "authors": [ { "first": "Manfred", "middle": [], "last": "Klenner", "suffix": "" }, { "first": "Don", "middle": [], "last": "Tuggener", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Clematide", "suffix": "" } ], "year": 2017, "venue": "Proceedings of LSDSem@EACL", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/w17-0904" ] }, "num": null, "urls": [], "raw_text": "Manfred Klenner, Don Tuggener, and Simon Clematide. Stance detection in facebook posts of a german right-wing party. 
In Proceedings of LSDSem@EACL 2017.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "When is self-training effective for parsing", "authors": [ { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": null, "venue": "Proceedings of COLING2008", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David McClosky, Eugene Charniak, and Mark John- son. When is self-training effective for parsing? In Proceedings of COLING2008.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Automatic domain adaptation for parsing", "authors": [ { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2010, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David McClosky, Eugene Charniak, and Mark John- son. 2010. Automatic domain adaptation for pars- ing. In Proceedings of NAACL-HLT 2010.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL 2009. The Association for Computer Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju- rafsky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proceedings of ACL 2009. The Association for Computer Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Stance and sentiment in tweets", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2017, "venue": "ACM Trans. Internet Techn", "volume": "17", "issue": "3", "pages": "", "other_ids": { "DOI": [ "10.1145/3003433" ] }, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Trans. Internet Techn., 17(3):26:1-26:23.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Adapting taggers to twitter with not-so-distant supervision", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Ryan", "middle": [ "T" ], "last": "Mcdonald", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COL-ING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Plank, Dirk Hovy, Ryan T. McDonald, and An- ders S\u00f8gaard. 2014. Adapting taggers to twitter with not-so-distant supervision. 
In Proceedings of COL- ING 2014.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Fake news challenge", "authors": [ { "first": "Dean", "middle": [], "last": "Pomerleau", "suffix": "" }, { "first": "Delip", "middle": [], "last": "Rao", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dean Pomerleau and Delip Rao. 2017. Fake news chal- lenge.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Snorkel: rapid training data creation with weak supervision", "authors": [ { "first": "Alexander", "middle": [], "last": "Ratner", "suffix": "" }, { "first": "H", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "Henry", "middle": [ "R" ], "last": "Bach", "suffix": "" }, { "first": "Jason", "middle": [ "A" ], "last": "Ehrenberg", "suffix": "" }, { "first": "Sen", "middle": [], "last": "Fries", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/s00778-019-00552-1" ] }, "num": null, "urls": [], "raw_text": "Alexander Ratner, Stephen H. Bach, Henry R. Ehren- berg, Jason A. Fries, Sen Wu, and Christopher R\u00e9. 2020. Snorkel: rapid training data creation with weak supervision. VLDB J.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A simple but tough-to-beat baseline for the fake news challenge stance detection task", "authors": [ { "first": "Benjamin", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Georgios", "middle": [ "P" ], "last": "Spithourakis", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Riedel, Isabelle Augenstein, Georgios P. Sp- ithourakis, and Sebastian Riedel. 2017. A simple but tough-to-beat baseline for the fake news challenge stance detection task. CoRR, abs/1707.03264.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Strong baselines for neural semi-supervised learning under domain shift", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder and Barbara Plank. 2018. Strong baselines for neural semi-supervised learning under domain shift. In Proceedings of ACL 2018.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. 
In Proceedings of ACL 2016.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Annotating speaker stance in discourse: the brexit blog corpus", "authors": [ { "first": "Vasiliki", "middle": [], "last": "Simaki", "suffix": "" }, { "first": "Carita", "middle": [], "last": "Paradis", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Skeppstedt", "suffix": "" }, { "first": "Magnus", "middle": [], "last": "Sahlgren", "suffix": "" }, { "first": "Kostiantyn", "middle": [], "last": "Kucher", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kerren", "suffix": "" } ], "year": 2017, "venue": "Corpus Linguistics and Linguistic Theory", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasiliki Simaki, Carita Paradis, Maria Skeppstedt, Magnus Sahlgren, Kostiantyn Kucher, and Andreas Kerren. 2017. Annotating speaker stance in dis- course: the brexit blog corpus. Corpus Linguistics and Linguistic Theory.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Automatic detection of stance towards vaccination in online discussion forums", "authors": [ { "first": "Maria", "middle": [], "last": "Skeppstedt", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kerren", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2017, "venue": "Proceedings of DDDSM@IJCNLP 2017", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Skeppstedt, Andreas Kerren, and Manfred Stede. 2017. Automatic detection of stance towards vacci- nation in online discussion forums. In Proceedings of DDDSM@IJCNLP 2017, pages 1-8.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Semi-Supervised Learning and Domain Adaptation in Natural Language Processing", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2013, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.2200/S00497ED1V01Y201304HLT021" ] }, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard. 2013. Semi-Supervised Learning and Domain Adaptation in Natural Language Process- ing. Synthesis Lectures on Human Language Tech- nologies. Morgan & Claypool Publishers.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Distilling taskspecific knowledge from BERT into simple neural networks", "authors": [ { "first": "Raphael", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Linqing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task- specific knowledge from BERT into simple neural networks. 
CoRR, abs/1903.12136.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Fact checking: Task definition and dataset construction", "authors": [ { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Workshop on Language Technologies and Computational Social Science@ACL", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3115/v1/W14-2508" ] }, "num": null, "urls": [], "raw_text": "Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the Workshop on Language Tech- nologies and Computational Social Science@ACL 2014. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Detection and resolution of rumours in social media: A survey", "authors": [ { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Ahmet", "middle": [], "last": "Aker", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" } ], "year": 2018, "venue": "ACM Comput. Surv", "volume": "51", "issue": "2", "pages": "", "other_ids": { "DOI": [ "10.1145/3161603" ] }, "num": null, "urls": [], "raw_text": "Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018a. Detection and resolution of rumours in social media: A sur- vey. ACM Comput. Surv., 51(2):32:1-32:36.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Analysing how people orient to and spread rumours in social media by looking at conversational threads. CoRR", "authors": [ { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Geraldine", "middle": [], "last": "Wong Sak", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Hoi", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Procter", "suffix": "" }, { "first": "", "middle": [], "last": "Tolmie", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arkaitz Zubiaga, Geraldine Wong Sak Hoi, Maria Liakata, Rob Procter, and Peter Tolmie. 2015. Analysing how people orient to and spread rumours in social media by looking at conversational threads. CoRR, abs/1511.07487.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Discourseaware rumour stance classification in social media using sequential classifiers", "authors": [ { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Kochkina", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Lukasik", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": 2018, "venue": "Inf. Process. 
Manage", "volume": "54", "issue": "2", "pages": "273--290", "other_ids": { "DOI": [ "10.1016/j.ipm.2017.11.009" ] }, "num": null, "urls": [], "raw_text": "Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, Michal Lukasik, Kalina Bontcheva, Trevor Cohn, and Isabelle Augenstein. 2018b. Discourse- aware rumour stance classification in social media using sequential classifiers. Inf. Process. Manage., 54(2):273-290.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Pipeline of the framework. Rectangular boxes: gold annotations; cornered boxes: unlabeled/synthetic annotations; green lines: elements which are passed from different stages of the pipeline." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Performance of models trained on different amounts of synthetic data (percentage with respect to the train set size)." }, "TABREF1": { "num": null, "content": "", "html": null, "type_str": "table", "text": "" }, "TABREF3": { "num": null, "content": "
domain | setting | dataset | synth data | prec | rec | F1 | acc | SUP | REF | COM | UNR
health | ID | CI ESRX | none | 56.80 | 54.19 | 54.99 | 59.52 | 71.17 | 37.70 | 62.48 | 45.39
health | ID | CI ESRX | related merger | 52.24 | 51.81 | 52.12 | 56.59 | 63.52 | 28.57 | 53.82 | 61.03
health | ID | CI ESRX | succeeded merger | 52.05 | 49.60 | 49.50 | 55.64 | 62.41 | 18.65 | 53.69 | 63.65
health | ID | CI ESRX | all merger | 53.94 | 50.94 | 50.94 | 56.59 | 63.25 | 21.83 | 53.95 | 64.74
entertain | OOD | DIS FOXA | none | 39.61 | 35.10 | 34.55 | 55.34 | 83.36 | 07.41 | 14.43 | 34.85
entertain | OOD | DIS FOXA | related merger | 39.34 | 37.93 | 37.69 | 55.56 | 60.38 | 15.87 | 17.12 | 59.33
entertain | OOD | DIS FOXA | succeeded merger | 38.80 | 35.55 | 35.92 | 54.75 | 46.86 | 06.61 | 16.34 | 72.39
entertain | OOD | DIS FOXA | all merger | 40.99 | 36.16 | 36.95 | 57.42 | 54.99 | 06.35 | 12.87 | 70.44
defense | OOD | UTX COL | none | 35.18 | 27.16 | 21.91 | 44.02 | 08.08 | 00.00 | 08.54 | 92.00
defense | OOD | UTX COL | related merger | 46.91 | 38.98 | 24.09 | 45.54 | 15.15 | 00.00 | 06.53 | 94.22
defense | OOD | UTX COL | succeeded mergers | 41.68 | 29.19 | 23.99 | 45.73 | 16.16 | 00.00 | 05.03 | 95.56
defense | OOD | UTX COL | all merger | 37.62 | 28.52 | 23.67 | 44.97 | 14.14 | 00.00 | 07.04 | 92.89
", "html": null, "type_str": "table", "text": "" }, "TABREF5": { "num": null, "content": "", "html": null, "type_str": "table", "text": "Results of SD on the OOD test sets, selecting synthetic data annotated with different stances (3 rd col)." } } } }