{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:31.234205Z" }, "title": "Claim Detection in Biomedical Twitter Posts", "authors": [ { "first": "Amelie", "middle": [], "last": "W\u00fchrl", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": { "country": "Germany" } }, "email": "amelie.wuehrl@ims.uni-stuttgart.de" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": { "country": "Germany" } }, "email": "roman.klinger@ims.uni-stuttgart.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Social media contains unfiltered and unique information, which is potentially of great value, but, in the case of misinformation, can also do great harm. With regards to biomedical topics, false information can be particularly dangerous. Methods of automatic fact-checking and fake news detection address this problem, but have not been applied to the biomedical domain in social media yet. We aim to fill this research gap and annotate a corpus of 1200 tweets for implicit and explicit biomedical claims (the latter also with span annotations for the claim phrase). With this corpus, which we sample to be related to COVID-19, measles, cystic fibrosis, and depression, we develop baseline models which detect tweets that contain a claim automatically. Our analyses reveal that biomedical tweets are densely populated with claims (45 % in a corpus sampled to contain 1200 tweets focused on the domains mentioned above). Baseline classification experiments with embedding-based classifiers and BERT-based transfer learning demonstrate that the detection is challenging, however, shows acceptable performance for the identification of explicit expressions of claims. 
Implicit claim tweets are more challenging to detect.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Social media contains unfiltered and unique information, which is potentially of great value, but, in the case of misinformation, can also do great harm. With regards to biomedical topics, false information can be particularly dangerous. Methods of automatic fact-checking and fake news detection address this problem, but have not been applied to the biomedical domain in social media yet. We aim to fill this research gap and annotate a corpus of 1200 tweets for implicit and explicit biomedical claims (the latter also with span annotations for the claim phrase). With this corpus, which we sample to be related to COVID-19, measles, cystic fibrosis, and depression, we develop baseline models which detect tweets that contain a claim automatically. Our analyses reveal that biomedical tweets are densely populated with claims (45 % in a corpus sampled to contain 1200 tweets focused on the domains mentioned above). Baseline classification experiments with embedding-based classifiers and BERT-based transfer learning demonstrate that the detection is challenging, however, shows acceptable performance for the identification of explicit expressions of claims. Implicit claim tweets are more challenging to detect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Social media platforms like Twitter contain vast amounts of valuable and novel information, and biomedical aspects are no exception (Correia et al., 2020) . Doctors share insights from their everyday life, patients report on their experiences with particular medical conditions and drugs, or they discuss and hypothesize about the potential value of a treatment for a particular disease. 
This information can be of great value: governmental administrations or pharmaceutical companies can, for instance, learn about unknown side effects or potentially beneficial off-label use of medications. At the same time, unproven claims or even intentionally spread misinformation might also do great harm. Therefore, it is important to contextualize a social media message and to investigate whether a statement is debated or can actually be proven with a reference to a reliable resource. The task of detecting such claims is essential in argument mining and a prerequisite for further analyses such as fact-checking or hypothesis generation. We show an example of a tweet with a claim in Figure 1 .", "cite_spans": [ { "start": 132, "end": 154, "text": "(Correia et al., 2020)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 1074, "end": 1082, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Claims are widely considered the conclusive and therefore central part of an argument (Lippi and Torroni, 2015; , consequently making them the most valuable information to extract. Argument mining and claim detection have been explored for texts like legal documents, Wikipedia articles, essays (Moens et al., 2007; Stab and Gurevych, 2017, i.a.) , social media and web content (Goudas et al., 2014; Bosc et al., 2016a; Dusmanu et al., 2017, i.a.) . They have also been applied to scientific biomedical publications (Achakulvisut et al., 2019; Mayer et al., 2020, i.a.) 
, but biomedical arguments as they occur on social media, and particularly Twitter, have not been analyzed yet.", "cite_spans": [ { "start": 86, "end": 111, "text": "(Lippi and Torroni, 2015;", "ref_id": "BIBREF35" }, { "start": 292, "end": 312, "text": "(Moens et al., 2007;", "ref_id": "BIBREF41" }, { "start": 313, "end": 343, "text": "Stab and Gurevych, 2017, i.a.)", "ref_id": null }, { "start": 375, "end": 396, "text": "(Goudas et al., 2014;", "ref_id": "BIBREF21" }, { "start": 397, "end": 416, "text": "Bosc et al., 2016a;", "ref_id": "BIBREF8" }, { "start": 417, "end": 444, "text": "Dusmanu et al., 2017, i.a.)", "ref_id": null }, { "start": 510, "end": 537, "text": "(Achakulvisut et al., 2019;", "ref_id": "BIBREF1" }, { "start": 538, "end": 563, "text": "Mayer et al., 2020, i.a.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With this paper, we fill this gap and explore claim detection for tweets discussing biomedical topics, particularly tweets about COVID-19, the measles, cystic fibrosis, and depression, to allow for drawing conclusions across different fields.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions to a better understanding of biomedical claims made on Twitter are, (1), to publish the first biomedical Twitter corpus manually labeled with claims (distinguished in explicit and implicit, and with span annotations for explicit claim phrases), and (2), baseline experiments to detect (implicit and explicit) claim tweets in a classification setting. 
Further, (3), we find in a cross-corpus study that a generalization across domains is challenging and that biomedical tweets pose a particularly difficult environment for claim detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Detecting biomedical claims on Twitter is a task rooted in both the argument mining field as well as the area of biomedical text mining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Argumentation mining covers a variety of different domains, text, and discourse types. This includes online content, for instance Wikipedia Roitman et al., 2016; Lippi and Torroni, 2015) , but also more interaction-driven platforms, like fora. As an example, Habernal and Gurevych (2017) extract argument structures from blogs and forum posts, including comments. Apart from that, Twitter is generally a popular text source (Bosc et al., 2016a; Dusmanu et al., 2017) . Argument mining is also applied to professionally generated content, for instance news (Goudas et al., 2014; Sardianos et al., 2015) and legal or political documents (Moens et al., 2007; Palau and Moens, 2009; Mochales and Moens, 2011; Florou et al., 2013) . 
Another domain of interest is persuasive essays, which we also use in a cross-domain study in this paper (Lippi and Torroni, 2015; .", "cite_spans": [ { "start": 140, "end": 161, "text": "Roitman et al., 2016;", "ref_id": "BIBREF47" }, { "start": 162, "end": 186, "text": "Lippi and Torroni, 2015)", "ref_id": "BIBREF35" }, { "start": 424, "end": 444, "text": "(Bosc et al., 2016a;", "ref_id": "BIBREF8" }, { "start": 445, "end": 466, "text": "Dusmanu et al., 2017)", "ref_id": "BIBREF17" }, { "start": 556, "end": 577, "text": "(Goudas et al., 2014;", "ref_id": "BIBREF21" }, { "start": 578, "end": 601, "text": "Sardianos et al., 2015)", "ref_id": "BIBREF48" }, { "start": 635, "end": 655, "text": "(Moens et al., 2007;", "ref_id": "BIBREF41" }, { "start": 656, "end": 678, "text": "Palau and Moens, 2009;", "ref_id": "BIBREF44" }, { "start": 679, "end": 704, "text": "Mochales and Moens, 2011;", "ref_id": "BIBREF40" }, { "start": 705, "end": 725, "text": "Florou et al., 2013)", "ref_id": "BIBREF19" }, { "start": 834, "end": 859, "text": "(Lippi and Torroni, 2015;", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Argumentation Mining", "sec_num": "2.1" }, { "text": "Existing approaches differ with regard to which tasks in the broader argument mining pipeline they address. 
While some focus on the detection of arguments (Moens et al., 2007; Florou et al., 2013; Bosc et al., 2016a; Dusmanu et al., 2017; , others analyze the relational aspects between argument components (Mochales and Moens, 2011; .", "cite_spans": [ { "start": 156, "end": 176, "text": "(Moens et al., 2007;", "ref_id": "BIBREF41" }, { "start": 177, "end": 197, "text": "Florou et al., 2013;", "ref_id": "BIBREF19" }, { "start": 198, "end": 217, "text": "Bosc et al., 2016a;", "ref_id": "BIBREF8" }, { "start": 218, "end": 239, "text": "Dusmanu et al., 2017;", "ref_id": "BIBREF17" }, { "start": 308, "end": 334, "text": "(Mochales and Moens, 2011;", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Argumentation Mining", "sec_num": "2.1" }, { "text": "While most approaches cater to a specific domain or text genre, Stab et al. (2018) argue that domain-focused, specialized systems do not generalize to broader applications such as argument search in texts. In line with that, present a comparative study on crossdomain claim detection. They observe that diverse training data leads to a more robust model performance in unknown domains.", "cite_spans": [ { "start": 64, "end": 82, "text": "Stab et al. (2018)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Argumentation Mining", "sec_num": "2.1" }, { "text": "Claim detection is a central task in argumentation mining. It can be framed as a classification (Does a document/sentence contain a claim?) or as sequence labeling (Which tokens make up the claim?). The setting as classification has been explored, inter alia, as a retrieval task of online comments made by public stakeholders on pending governmental regulations (Kwon et al., 2007) , for sentence detection in essays, (Lippi and Torroni, 2015) , and for Wikipedia (Roitman et al., 2016; Levy et al., 2017) . 
The setting as a sequence labeling task has been tackled on Wikipedia , on Twitter, and on news articles (Goudas et al., 2014; Sardianos et al., 2015) .", "cite_spans": [ { "start": 363, "end": 382, "text": "(Kwon et al., 2007)", "ref_id": "BIBREF29" }, { "start": 419, "end": 444, "text": "(Lippi and Torroni, 2015)", "ref_id": "BIBREF35" }, { "start": 465, "end": 487, "text": "(Roitman et al., 2016;", "ref_id": "BIBREF47" }, { "start": 488, "end": 506, "text": "Levy et al., 2017)", "ref_id": "BIBREF33" }, { "start": 614, "end": 635, "text": "(Goudas et al., 2014;", "ref_id": "BIBREF21" }, { "start": 636, "end": 659, "text": "Sardianos et al., 2015)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Claim Detection", "sec_num": "2.2" }, { "text": "One common characteristic in most work on automatic claim detection is the focus on relatively formal text. Social media, like tweets, can be considered a more challenging text type, which despite this aspect, received considerable attention, also beyond classification or token sequence labeling. Bosc et al. (2016a) detect relations between arguments, Dusmanu et al. (2017) identify factual or opinionated tweets, and Addawood and Bashir (2016) further classify the type of premise which accompanies the claim. Ouertatani et al. (2020) combine aspects of sentiment detection, opinion, and argument mining in a pipeline to analyze argumentative tweets more comprehensively. Ma et al. (2018) specifically focus on the claim detection task in tweets, and present an approach to retrieve Twitter posts that contain argumentative claims about debatable political topics.", "cite_spans": [ { "start": 298, "end": 317, "text": "Bosc et al. (2016a)", "ref_id": "BIBREF8" }, { "start": 354, "end": 375, "text": "Dusmanu et al. (2017)", "ref_id": "BIBREF17" }, { "start": 420, "end": 446, "text": "Addawood and Bashir (2016)", "ref_id": "BIBREF2" }, { "start": 513, "end": 537, "text": "Ouertatani et al. 
(2020)", "ref_id": "BIBREF43" }, { "start": 675, "end": 691, "text": "Ma et al. (2018)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Claim Detection", "sec_num": "2.2" }, { "text": "To the best of our knowledge, detecting biomedical claims in tweets has not been approached yet. Biomedical argument mining, also for other text types, is generally still limited. The work by Shi and Bei (2019) is one of the few exceptions that target this challenge and propose a pipeline to extract health-related claims from headlines of health-themed news articles. The majority of other argument mining approaches for the biomedical domain focus on research literature (Blake, 2010; Alamri and Stevenson, 2015; Alamri and Stevensony, 2015; Achakulvisut et al., 2019; Mayer et al., 2020) .", "cite_spans": [ { "start": 192, "end": 210, "text": "Shi and Bei (2019)", "ref_id": "BIBREF51" }, { "start": 473, "end": 486, "text": "(Blake, 2010;", "ref_id": "BIBREF6" }, { "start": 487, "end": 514, "text": "Alamri and Stevenson, 2015;", "ref_id": "BIBREF4" }, { "start": 515, "end": 543, "text": "Alamri and Stevensony, 2015;", "ref_id": "BIBREF5" }, { "start": 544, "end": 570, "text": "Achakulvisut et al., 2019;", "ref_id": "BIBREF1" }, { "start": 571, "end": 590, "text": "Mayer et al., 2020)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Claim Detection", "sec_num": "2.2" }, { "text": "Biomedical natural language processing (BioNLP) is a field in computational linguistics which also receives substantial attention from the bioinformatics community. One focus is on the automatic extraction of information from life science articles, including entity recognition, e.g., of diseases, drug names, protein and gene names (Habibi et al., 2017; Giorgi and Bader, 2018; Lee et al., 2019, i.a.) or relations between those (Lamurias et al., 2019; Sousa et al., 2021; Lin et al., 2019, i.a.) . 
Biomedical text mining methods have also been applied to social media texts and web content (Wegrzyn-Wolska et al., 2011; Yang et al., 2016; Sullivan et al., 2016, i.a.) . One focus is on the analysis of Twitter with regards to pharmacovigilance. Other topics include the extraction of adverse drug reactions (Nikfarjam et al., 2015; Cocos et al., 2017) , monitoring public health (Paul and Dredze, 2012; Choudhury et al., 2013; , and detecting personal health mentions (Yin et al., 2015; Karisani and Agichtein, 2018) .", "cite_spans": [ { "start": 335, "end": 356, "text": "(Habibi et al., 2017;", "ref_id": "BIBREF23" }, { "start": 357, "end": 380, "text": "Giorgi and Bader, 2018;", "ref_id": "BIBREF20" }, { "start": 381, "end": 404, "text": "Lee et al., 2019, i.a.)", "ref_id": null }, { "start": 432, "end": 455, "text": "(Lamurias et al., 2019;", "ref_id": "BIBREF30" }, { "start": 456, "end": 475, "text": "Sousa et al., 2021;", "ref_id": "BIBREF53" }, { "start": 476, "end": 499, "text": "Lin et al., 2019, i.a.)", "ref_id": null }, { "start": 594, "end": 623, "text": "(Wegrzyn-Wolska et al., 2011;", "ref_id": "BIBREF60" }, { "start": 624, "end": 642, "text": "Yang et al., 2016;", "ref_id": "BIBREF62" }, { "start": 643, "end": 671, "text": "Sullivan et al., 2016, i.a.)", "ref_id": null }, { "start": 811, "end": 835, "text": "(Nikfarjam et al., 2015;", "ref_id": "BIBREF42" }, { "start": 836, "end": 855, "text": "Cocos et al., 2017)", "ref_id": "BIBREF12" }, { "start": 883, "end": 906, "text": "(Paul and Dredze, 2012;", "ref_id": "BIBREF45" }, { "start": 907, "end": 930, "text": "Choudhury et al., 2013;", "ref_id": "BIBREF11" }, { "start": 972, "end": 990, "text": "(Yin et al., 2015;", "ref_id": "BIBREF63" }, { "start": 991, "end": 1020, "text": "Karisani and Agichtein, 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Biomedical Text Mining", "sec_num": "2.3" }, { "text": "A small number of studies looked into the comparison of biomedical 
information in social media and scientific text: Thorne and Klinger (2018) analyze quantitatively how disease names are referred to across these domains. Seiffe et al. (2020) analyze laypersons' medical vocabulary.", "cite_spans": [ { "start": 221, "end": 241, "text": "Seiffe et al. (2020)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Biomedical Text Mining", "sec_num": "2.3" }, { "text": "As the basis for our study, we collect a novel Twitter corpus in which we annotate which tweets contain biomedical claims, and (for all explicit claims) which tokens correspond to that claim.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Creation and Analysis", "sec_num": "3" }, { "text": "The data for the corpus was collected in June/July 2020 using Twitter's API 1 which offers a keywordbased retrieval for tweets. Table 1 provides a sample of the search terms we used. 2 For each of the medical topics, we sample English tweets from keywords and phrases from four different query categories. This includes (1) the name of the disease as well as the respective hashtag for each topic, e.g., depression and #depression, (2) topical hashtags like #vaccineswork, (3) combinations of the disease name with words like cure, treatment or therapy as well as their respective verb forms, and (4) a list of medications, products, and product brand names from the pharmaceutical database DrugBank 3 .", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 135, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data Selection & Acquisition", "sec_num": "3.1" }, { "text": "When querying the tweets, we exclude retweets by using the API's '-filter:retweets' option. From overall 902,524 collected tweets, we filter out those with URLs since those are likely to be advertisements (Cocos et al., 2017; Ma et al., 2018) , and further remove duplicates based on the tweet IDs. 
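The filtering steps just described (retweets are already excluded at query time via '-filter:retweets'; tweets with URLs are discarded, and duplicates are removed by tweet ID) could be sketched as follows; this is an illustrative sketch, and the `id`/`text` field names are assumptions, not the paper's actual implementation:

```python
def filter_tweets(tweets):
    """Drop tweets containing URLs (likely advertisements) and
    remove duplicates based on the tweet IDs, keeping the first copy."""
    seen_ids = set()
    kept = []
    for tweet in tweets:
        if "http://" in tweet["text"] or "https://" in tweet["text"]:
            continue  # contains a URL -> likely an advertisement
        if tweet["id"] in seen_ids:
            continue  # duplicate tweet ID
        seen_ids.add(tweet["id"])
        kept.append(tweet)
    return kept
```

Deduplicating by ID rather than by text keeps near-duplicate wordings apart, which matches the paper's ID-based removal.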
From the resulting collection of 127,540 messages we draw a sample of 75 randomly selected tweets per topic (four biomedical topics) and search term category (four categories per topic). The final corpus to be annotated consists of 1200 tweets about four medical issues and their treatments: measles, depression, cystic fibrosis, and COVID-19.", "cite_spans": [ { "start": 205, "end": 225, "text": "(Cocos et al., 2017;", "ref_id": "BIBREF12" }, { "start": 226, "end": 242, "text": "Ma et al., 2018)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Data Selection & Acquisition", "sec_num": "3.1" }, { "text": "While there are different schemes and models of argumentative structure varying in complexity as well as in their conceptualization of claims, the claim element is widely considered the core component of an argument . suggest a framework in which an argument consists of two main components: a claim and premises. We follow and define the claim as the argumentative component in which the speaker or writer expresses the central, controversial conclusion of their argument. This claim is presented as if it were true even though objectively it can be true or false (Mochales and Ieven, 2009) . The premise which is considered the second part of an argument includes all elements that are used either to substantiate or disprove the claim. Arguments can contain multiple premises to justify the claim. (Refer to Section 3.4 for examples and a detailed analysis of argumentative tweets in the dataset.)", "cite_spans": [ { "start": 565, "end": 591, "text": "(Mochales and Ieven, 2009)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Conceptual Definition", "sec_num": "3.2.1" }, { "text": "For our corpus, we focus on the claim element and assign all tweets a binary label that indicates whether the document contains a claim. 
A claim can either be voiced explicitly, or it can be inferred from the text when it is expressed implicitly . We therefore annotate explicitness or implicitness if a tweet is labeled as containing a claim. For explicit cases, the claim sequence is additionally marked on the token level. For implicit cases, the claim that can be inferred from the implicit utterance is stated alongside the implicitness annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conceptual Definition", "sec_num": "3.2.1" }, { "text": "We define a preliminary set of annotation guidelines based on previous work (Mochales and Ieven, 2009; Bosc et al., 2016a; . To adapt those to our domain and topic, we go through four iterations of refinements. In each iteration, 20 tweets receive annotations by two annotators. Both annotators are female and aged 25-30. Annotator A1 has a background in linguistics and computational linguistics. A2 has a background in mathematics, computer science, and computational linguistics. The results are discussed based on the calculation of Cohen's \u03ba (Cohen, 1960) .", "cite_spans": [ { "start": 76, "end": 102, "text": "(Mochales and Ieven, 2009;", "ref_id": "BIBREF39" }, { "start": 103, "end": 122, "text": "Bosc et al., 2016a;", "ref_id": "BIBREF8" }, { "start": 547, "end": 560, "text": "(Cohen, 1960)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Guideline Development", "sec_num": "3.2.2" }, { "text": "After Iteration 1, we did not make any substantial changes, but reinforced a common understanding of the existing guidelines in a joint discussion. After Iteration 2, we clarified the guidelines by adding the notion of an argumentative intention as a prerequisite for a claim: a claim is only to be annotated if the author actually appears to be intentionally argumentative as opposed to just sharing an opinion (\u0160najder, 2016; Habernal and Gurevych, 2017). 
This is illustrated in the following example, which is not to be annotated as a claim, given this additional constraint:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guideline Development", "sec_num": "3.2.2" }, { "text": "This popped up on my memories from two years ago, on Instagram, and honestly I'm so much healthier now it's quite unbelievable. A stone heavier, on week 11 of no IVs (back then it was every 9 weeks), and it's all thanks to #Trikafta and determination. I am stronger than I think.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guideline Development", "sec_num": "3.2.2" }, { "text": "We further clarified the guidelines with regard to the claim being the conclusive element in a Twitter document. This change encouraged the annotators to reflect specifically on whether the conclusive, main claim is conveyed explicitly or implicitly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guideline Development", "sec_num": "3.2.2" }, { "text": "After Iteration 3, we did not introduce any changes, but went through an additional iteration to further establish the understanding of the annotation tasks. Table 2 shows the results of the agreement of the annotators in each iteration as well as the final \u03ba-score for the corpus. We observe that the agreement substantially increased from Iteration 1 to 4. However, we also observe that obtaining a substantial agreement for the span annotation remains the most challenging task.", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Guideline Development", "sec_num": "3.2.2" }, { "text": "The corpus annotation was carried out by the same annotators who conducted the preliminary annotations. A1 labeled 1000 tweets while A2 annotated 300 instances. From both of these sets, 100 tweets were provided to both annotators to track agreement (which remained stable, see Table 2 ). 
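Agreement is reported as Cohen's \u03ba throughout; as a minimal, stdlib-only sketch (the toy label sequences below are invented for illustration), the statistic contrasts observed agreement with the agreement expected by chance:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators: observed agreement p_o,
    corrected for the chance agreement p_e implied by each annotator's
    label distribution. Assumes both lists cover the same items in order."""
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    # observed agreement: fraction of items with identical labels
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # expected agreement under independence of the two annotators
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[lab] * counts_b[lab] for lab in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)  # undefined when p_e == 1
```

Perfect agreement yields 1.0 and chance-level agreement yields 0.0, which is the scale on which the reported .38-.56 values should be read.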
Annotating 100 tweets took approx. 3.3 hours. Overall, we observe that the agreement is generally moderate. Separating claim tweets from non-claim tweets shows an acceptable \u03ba=.56. Including the decision of explicitness/implicitness leads to \u03ba=.48. The span-based annotation has limited agreement, with \u03ba=.38 (which is why we do not consider this task further in this paper). These numbers are roughly in line with previous work. Achakulvisut et al. (2019) report an average \u03ba=0.63 for labeling claims in biomedical research papers. According to , explicit, intentional argumentation is easier to annotate than texts which are less explicit. Our corpus is available with detailed annotation guidelines at http://www.ims.uni-stuttgart.de/data/bioclaim. The longest tweet in the corpus consists of 110 tokens 4 , while the two shortest consist only of two 4 The tweet includes 50 @-mentions followed by a measles-related claim: \"Oh yay! I can do this too, since you're going to ignore the thousands of children who died in outbreaks last year from measles... Show me a proven death of a child from vaccines in the last decade. That's the time reference, now? So let's see a death certificate that says it, thx\" id Instance", "cite_spans": [ { "start": 1139, "end": 1140, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 276, "end": 283, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Annotation Procedure", "sec_num": "3.2.3" }, { "text": "The French have had great success #hydroxycloroquine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "Death is around 1/1000 in measles normally, same for encephalopathy, hospitalisation around 1/5. With all the attendant costs, the vaccine saves money, not makes it. 3 Latest: Kimberly isn't worried at all. She takes #Hydroxychloroquine and feels awesome the next day. 
Just think, it's more dangerous to drive a car than to catch corona 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "Lol exactly. It's not toxic to your body idk where he pulled this information out of. Acid literally cured my depression/anxiety I had for 5 years in just 5 months (3 trips). It literally reconnects parts of your brain that haven't had that connection in a long time. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "Hopefully! The MMR toxin loaded vaccine I received many years ago seemed to work very well. More please! 6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "Wow! Someone tell people with Cystic fibrosis and Huntington's that they can cure their genetics through Mormonism! tokens 5 . On average, a claim tweet has a length of \u224840 tokens. Both claim tweet types, explicit and implicit, have similar lengths (39.89 and 39.88 tokens respectively). In contrast to that, the average non-claim tweet is shorter, consisting of about 30 tokens. Roughly half of an explicit claim corresponds to the claim phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "We generally see that there is a connection between the length of a tweet and its class membership. Out of all tweets with up to 40 tokens, 453 instances are non-claims, while 243 contain a claim. For the instances that consist of 41 and more tokens, only 210 are non-claim tweets, whereas 294 are labeled as claims. 
Shorter tweets (\u2264 40 tokens) thus tend to be non-claim instances, while longer tweets (\u2265 41 tokens) tend to belong to the claim class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "To obtain a better understanding of the corpus, we perform a qualitative analysis on a subsample of 50 claim instances per topic. We manually analyze four claim properties: the tweet exhibits an incomplete argument structure, different argument components blend into each other, the text shows anecdotal evidence, and it describes the claim implicitly. Refer to Table 4 for an overview of the results.", "cite_spans": [], "ref_spans": [ { "start": 358, "end": 365, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.4" }, { "text": "In line with \u0160najder (2016), we find that argument structures are often incomplete, e.g., instances only contain a stand-alone claim without any premise. This characteristic is most prevalent in the COVID-19-related tweets. In Table 5 , Ex. 1 is missing a premise, while Ex. 2 presents both premise and claim.", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 234, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.4" }, { "text": "Argument components (claim, premise) are not very clear-cut and often blend together. Consequently, they can be difficult to differentiate, for instance when authors use claim-like elements as a premise. This characteristic is, again, most prevalent for COVID-19. In Ex. 
3 in Table 5 , the last sentence reads like a claim, especially when looked at in isolation, yet it is in fact used by the author to explain their claim.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.4" }, { "text": "Premise elements which substantiate and give reason for the claim (Bosc et al., 2016b) traditionally include references to studies or mentions of expert testimony, but occasionally also anecdotal evidence or concrete examples . We find the latter to be very common for our data set. This property is most frequent for cystic fibrosis and depression. Ex. 4 showcases how a personal experience is used to build an argument.", "cite_spans": [ { "start": 66, "end": 86, "text": "(Bosc et al., 2016b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.4" }, { "text": "Implicitness in the form of irony, sarcasm or rhetoric questions are common features for these types of claims on Twitter. We observe claims related to cystic fibrosis are most often (in our sample) implicit. Ex. 5 and 6 show instances that use sarcasm or irony. 
The fact that implicitness is such a common feature in our dataset is in line with the observation that implicitness is a characteristic device not only in spoken language and everyday, informal argumentation (Lumer, 1990) , but also in user-generated web content in general .", "cite_spans": [ { "start": 472, "end": 485, "text": "(Lumer, 1990)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "3.4" }, { "text": "In the following sections, we describe the conceptual design of our experiments and introduce the models that we use to accomplish the claim detection task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "We model the task in a set of different model configurations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Tasks", "sec_num": "4.1" }, { "text": "Multiclass. A trained classifier distinguishes between explicit claim, implicit claim, and non-claim.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binary. A trained classifier distinguishes between claim and non-claim.", "sec_num": null }, { "text": "Multiclass Pipeline. A first classifier learns to discriminate between claims and non-claims (as in Binary). Each tweet that is classified as claim is further separated into implicit or explicit with another binary classifier. The secondary classifier is trained on gold data (not on predictions of the first model in the pipeline).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binary. A trained classifier distinguishes between claim and non-claim.", "sec_num": null }, { "text": "For each of the classification tasks (binary/multiclass, steps in the pipeline), we use a set of standard text classification methods which we compare. 
The first three models (NB, LG, BiLSTM) use 50-dimensional FastText (Bojanowski et al., 2017) embeddings trained on the Common Crawl corpus (600 billion tokens) as input.6", "cite_spans": [ { "start": 220, "end": 245, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "NB. We use a (Gaussian) naive Bayes classifier with the average vector of the token embeddings as input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "LG. We use a logistic regression classifier with the same features as for NB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "BiLSTM. As a classifier which can consider contextual information and make use of pretrained embeddings, we use a bidirectional long short-term memory network (Hochreiter and Schmidhuber, 1997) with 75 LSTM units, followed by the output layer (sigmoid for binary classification, softmax for multiclass).", "cite_spans": [ { "start": 160, "end": 194, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "BERT. We use the pretrained BERT (Devlin et al., 2019) base model7 and fine-tune it on the claim tweet corpus.", "cite_spans": [ { "start": 33, "end": 54, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "With the first experiment, we explore how reliably we can detect claim tweets in our corpus and how well the two different claim types (explicit vs. implicit claim tweets) can be distinguished. We use each model mentioned in Section 4.2 in each setting described in Section 4.1.
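As a sketch of the feature construction for the NB and LG baselines, a tweet is represented by the average of its token embeddings. The toy three-dimensional lookup table below stands in for the pretrained 50-dimensional FastText vectors; all words, values, and the `doc_vector` helper are invented for illustration:

```python
import numpy as np

# Toy embedding table standing in for pretrained FastText vectors
# (the real setup uses 50-dimensional vectors; 3 dimensions here for brevity).
emb = {
    "turmeric": np.array([0.9, 0.1, 0.0]),
    "cures":    np.array([0.2, 0.8, 0.1]),
    "colds":    np.array([0.1, 0.2, 0.9]),
}

def doc_vector(tokens, emb, dim=3):
    """Average the embeddings of all known tokens; zero vector if none."""
    vecs = [emb[t] for t in tokens if t in emb]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

# The resulting fixed-length vector is the input to NB and LG.
v = doc_vector(["turmeric", "cures", "colds"], emb)
```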
We evaluate each classifier in a binary or (where applicable) a multi-class setting to understand whether splitting the claim category into its subcomponents improves the claim prediction overall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Claim Detection", "sec_num": "5.1" }, { "text": "Table 6: Results for the claim detection experiments, separated into binary and multi-class evaluation. Scores are given per model (NB, LG, LSTM, BERT) as P/R/F1. Binary evaluation, binary task: claim NB .67/.65/.66, LG .66/.74/.70, LSTM .68/.48/.57, BERT .66/.72/.69; n-claim NB .75/.77/.76, LG .79/.72/.76, LSTM .69/.84/.75, BERT .78/.72/.75. Binary evaluation, multiclass task: claim NB .66/.65/.66, LG .73/.53/.61, LSTM .75/.35/.48, BERT .81/.49/.61; n-claim NB .74/.76/.75, LG .71/.85/.78, LSTM .66/.91/.76, BERT .71/.91/.80. Multi-class evaluation, multiclass task: expl NB .55/.45/.50, LG .63/.39/.48, LSTM .59/.27/.37, BERT .62/.45/.52; impl NB .31/.44/.36, LG .33/.35/.34, LSTM .18/.09/.12, BERT .29/.09/.13; n-claim NB .74/.76/.75, LG .71/.85/.78, LSTM .66/.91/.76, BERT .71/.91/.80. Multi-class evaluation, pipeline task: expl NB .56/.45/.50, LG .52/.55/.53, LSTM .50/.37/.43, BERT .54/.65/.59; impl NB .31/.44/.36, LG .28/.35/.31, LSTM .07/.04/.05, BERT .26/.22/.24; n-claim NB .75/.77/.76, LG .79/.72/.76, LSTM .69/.84/.75, BERT .78/.72/.75.
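The per-class precision, recall, and F1 values of the kind reported in Table 6 follow the standard definitions; a minimal sketch with scikit-learn on invented gold and predicted labels:

```python
from sklearn.metrics import precision_recall_fscore_support

# Invented gold and predicted labels for the binary task (claim vs. n-claim).
gold = ["claim", "claim", "claim", "n-claim", "n-claim", "n-claim"]
pred = ["claim", "claim", "n-claim", "n-claim", "n-claim", "claim"]

# Per-class scores, in the order given by `labels`.
p, r, f1, _ = precision_recall_fscore_support(
    gold, pred, labels=["claim", "n-claim"])
# For "claim": 2 true positives, 1 false positive, 1 false negative,
# hence precision = recall = F1 = 2/3.
```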
The best F1 scores for each evaluation setting and class are printed in bold face.", "cite_spans": [], "ref_spans": [ { "start": 678, "end": 685, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "NB", "sec_num": null }, { "text": "From our corpus of 1200 tweets, we use 800 instances for training, 200 as validation data to optimize hyper-parameters, and 200 as test data. We tokenize the documents and substitute all @-mentions by \"@username\". For the LG models, we use l2 regularization. For the LSTM models, the hyper-parameters learning rate, dropout, number of epochs, and batch size were determined by a randomized search over a parameter grid; these models also use l2 regularization. For training, we use Adam (Kingma and Ba, 2015). For the BERT models, we experiment with combinations of the recommended fine-tuning hyper-parameters from Devlin et al. (2019) (batch size, learning rate, epochs), and use those with the best performance on the validation data. An overview of all hyper-parameters is provided in Table 9 in the Appendix. For the BiLSTM, we use the Keras API (Chollet et al., 2015) for TensorFlow (Abadi et al., 2015).
For the BERT model, we use Simple Transformers (Rajapakse, 2019) and its wrapper for the Hugging Face transformers library (Wolf et al., 2020). Further, we oversample the minority class of implicit claims to achieve a balanced training set (the test set retains the original distribution). To ensure comparability, we oversample in both the binary and the multi-class setting. For parameters that we do not explicitly mention, we use default values. Table 6 reports the results for the conducted experiments. The top half lists the results for the binary claim detection setting. The bottom half of the table presents the results for the multi-class claim classification.", "cite_spans": [ { "start": 842, "end": 864, "text": "(Chollet et al., 2015)", "ref_id": null }, { "start": 880, "end": 900, "text": "(Abadi et al., 2015)", "ref_id": "BIBREF0" }, { "start": 950, "end": 967, "text": "(Rajapakse, 2019)", "ref_id": "BIBREF46" }, { "start": 1026, "end": 1045, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF61" } ], "ref_spans": [ { "start": 779, "end": 786, "text": "Table 9", "ref_id": null }, { "start": 1359, "end": 1366, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setting", "sec_num": "5.1.1" }, { "text": "For the binary evaluation setting, we observe that casting the problem as a ternary prediction task is not beneficial: the best F1 score is obtained with the binary LG classifier (.70 F1 for the class claim, in contrast to .61 F1 for the ternary LG). The BERT and NB approaches are slightly worse (1pp and 4pp less, respectively), while the LSTM shows substantially lower performance (13pp less).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.1.2" }, { "text": "In the ternary/multi-class evaluation, the scores are overall lower. The LSTM shows the lowest performance.
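The minority-class oversampling described in the experimental setting above can be sketched as random duplication of minority-class instances until the classes are balanced. The paper does not specify the exact procedure, so the helper below is an assumption for illustration:

```python
import random

def oversample(instances, labels, seed=0):
    """Randomly duplicate minority-class instances until every class is as
    frequent as the largest one (applied to training data only; the test
    set keeps its original distribution)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(instances, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        # Pad each class with random duplicates up to the target size.
        xs = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(xs)
        out_y.extend([y] * len(xs))
    return out_x, out_y
```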
The best result is obtained in the pipeline setting, in which separate classifiers can focus on distinguishing claim/non-claim and explicit/implicit: we see .59 F1 for the explicit claim class. Implicit claim detection is substantially more challenging across all classification approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.1.2" }, { "text": "We attribute the fact that the more complex models (LSTM, BERT) do not outperform the linear models across the board to the comparatively small size of the dataset. This appears especially true for implicit claims in the multi-class setting. Here, those models struggle the most to predict implicit claims, indicating that they were not able to learn sufficiently from the training instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.1.2" }, { "text": "From a manual introspection of the best-performing model in the binary setting, we conclude that it is difficult to detect general patterns. We show two cases of false positives and two cases of false negatives in Table 7. The false positive instances show that the model struggles with cases that rely on judging the argumentative intention. Both Ex. 1 and 2 contain potential claims about depression and therapy, but they have not been annotated as such, because the authors' intention is motivational rather than argumentative. In addition, it appears that the model struggles to detect implicit claims that are expressed using irony (Ex. 3) or a rhetorical question (Ex.
4).", "cite_spans": [], "ref_spans": [ { "start": 214, "end": 221, "text": "Table 7", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.1.3" }, { "text": "We see that the models show acceptable performance in a binary classification setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-domain Experiment", "sec_num": "5.2" }, { "text": "In the following, we analyze whether this observation holds across domains and whether information from another, out-of-domain corpus can help.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-domain Experiment", "sec_num": "5.2" }, { "text": "As the binary LG model achieved the best results in the previous experiment, we use this classifier for the cross-domain experiments. We work with paragraphs of persuasive essays as a comparative corpus. The motivation for using this resource is that, while essays are a distinctly different text type and usually linguistically much more formal than tweets, they are also opinionated documents.8 We use the resulting essay model for making an in-domain as well as a cross-domain prediction, and vice versa for the Twitter model. We further experiment with combining the training portions of both datasets and evaluate the performance of the combined model for both target domains.", "cite_spans": [ { "start": 394, "end": 395, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Cross-domain Experiment", "sec_num": "5.2" }, { "text": "The comparative corpus contains persuasive essays with annotated argument structure. (Footnote 8: An essay is defined as \"a short piece of writing on a particular subject, often expressing personal views\", https://dictionary.cambridge.org/dictionary/english/essay.) Table 8: Results of cross-domain experiments using the binary LG model on the Twitter and the essay corpus. We report precision, recall and F1 for the claim tweet class.", "cite_spans": [], "ref_spans": [ { "start": 271, "end": 278, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setting", "sec_num": "5.2.1" }, { "text": "The corpus has subsequently been provided in CoNLL format, split into paragraphs, and pre-divided into train, development and test sets.9 We use this version of the corpus. The annotations for the essay corpus distinguish between major claims and claims. However, since there is no such hierarchical differentiation in the Twitter annotations, we consider both types as equivalent. We choose to use paragraphs instead of whole essays as the individual input documents for the classification and assign a claim label to every paragraph that contains a claim. This leaves us with 1587 essay paragraphs as training data, and 199 and 449 paragraphs for validation and testing, respectively. We follow the same setup as for the binary setting in the first experiment.", "cite_spans": [ { "start": 130, "end": 131, "text": "9", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "5.2.1" }, { "text": "In Table 8, we summarize the results of the cross-domain experiments with the persuasive essay corpus. We see that the essay model is successful at classifying claim documents (.98 F1) in the in-domain experiment. Compared to the in-domain setting for tweets, all evaluation scores are substantially higher.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2.2" }, { "text": "When we compare the two cross-domain experiments, we observe that the performance measures decrease in both settings when we use the out-of-domain model to make predictions (11pp in F1 for tweets, 15pp for essays). Combining the training portions of both data sets does not lead to an improvement over the in-domain experiments.
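The in-domain, cross-domain, and combined settings of this experiment can be sketched as a loop over training sources. The helper below and its data layout are hypothetical illustrations, not the paper's code:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def cross_domain_eval(domains):
    """Train on each domain and on the union of all training portions, then
    evaluate every model on every domain's test set.
    `domains` maps a name to ((X_train, y_train), (X_test, y_test))."""
    train_sets = {name: trn for name, (trn, _) in domains.items()}
    # Combined setting: concatenate the training portions of all domains.
    train_sets["combined"] = (
        [x for X, _ in train_sets.values() for x in X],
        [y for _, Y in train_sets.values() for y in Y],
    )
    results = {}
    for src, (X_trn, y_trn) in train_sets.items():
        model = LogisticRegression().fit(X_trn, y_trn)
        for tgt, (_, (X_tst, y_tst)) in domains.items():
            # (src, tgt) pairs with src != tgt are the cross-domain settings.
            results[(src, tgt)] = f1_score(y_tst, model.predict(X_tst))
    return results
```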
This shows the challenge of building domain-generic models that perform well across different data sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2.2" }, { "text": "In this paper, we presented the first data set for biomedical claim detection in social media. In our first experiment, we showed that we can achieve acceptable performance in detecting claims when the distinction between explicit and implicit claims is not considered. In the cross-domain experiment, we see that text formality, which is one of the main distinguishing features between the two corpora, might be an important factor that influences the difficulty of the claim detection task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "Our hypothesis in this work was that biomedical information on Twitter exhibits a challenging setting for claim detection. Both our experiments indicate that this is true. Future work needs to investigate the reasons for this. We hypothesize that our Twitter dataset contains particular aspects that are specific to the medical domain, but it might also be that other latent variables lead to confounders (e.g., the time span that has been used for crawling). It is important to better understand these properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "We suggest that future work optimize claim detection models to work well across domains. To enable such research, this paper contributed a novel resource. This resource could further be improved. One way of addressing the moderate agreement between the annotators could be to include annotators with medical expertise, to see if this ultimately facilitates claim annotation.
Additionally, a detailed introspection of the topics covered in the tweets for each disease would be interesting for future work, since this might shed some light on which topical categories of claims are particularly difficult to label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "The COVID-19 pandemic has sparked recent research with regard to detecting misinformation and fact-checking claims (e.g., Hossain et al. (2020) or Wadden et al. (2020)). Exploring how a claim-detection-based fact-checking approach rooted in argument mining compares to other approaches is up to future research.", "cite_spans": [ { "start": 123, "end": 144, "text": "Hossain et al. (2020)", "ref_id": null }, { "start": 148, "end": 168, "text": "Wadden et al. (2020)", "ref_id": "BIBREF58" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "https://developer.twitter.com/en/docs/twitter-api 2 The full list of search terms (1771 queries in total) is available in the supplementary material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://go.drugbank.com/. At the time of creating the search term list, COVID-19 was not included in DrugBank. Instead, medications that were under investigation at the time of compiling this list, as outlined on the WHO website, were included for Sars-CoV-2 in this category: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/global-research-on-novel-coronavirus-2019-ncov/solidarity-clinical-trial-for-covid-19-treatments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\"Xanax damage\" and \"Holy fuck\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://fasttext.cc/docs/en/english-vectors.html 7 https://huggingface.co/bert-base-uncased", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/UKPLab/acl2017-neural_end2end_am/tree/master/data/conll/Paragraph_Level", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Automatic Fact Checking for Biomedical Information in Social Media and Scientific Literature, https://www.ims.uni-stuttgart.de/en/research/projects/fibiss/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research has been conducted as part of the FIBISS project,10 which is funded by the German Research Council (DFG, project number: KL 2869/5-1).
We thank Laura Ana Maria Oberl\u00e4nder for her support and the anonymous reviewers for their valuable comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Citro", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Devin", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Ghemawat", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Harp", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Irving", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Yangqing", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Manjunath", "middle": [], "last": "Kudlur ; Martin Wicke", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaoqiang", "middle": [], "last": "Zheng", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, 
Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefow- icz, Lukasz Kaiser, Manjunath Kudlur, Josh Leven- berg, Dandelion Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Mar- tin Wattenberg, Martin Wicke, Yuan Yu, and Xiao- qiang Zheng. 2015. TensorFlow: Large-scale ma- chine learning on heterogeneous systems. Software available from tensorflow.org.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Claim extraction in biomedical publications using deep discourse model and transfer learning", "authors": [ { "first": "Titipat", "middle": [], "last": "Achakulvisut", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Acuna", "suffix": "" }, { "first": "Konrad", "middle": [], "last": "Kording", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.00962" ] }, "num": null, "urls": [], "raw_text": "Titipat Achakulvisut, Chandra Bhagavatula, Daniel Acuna, and Konrad Kording. 2019. Claim ex- traction in biomedical publications using deep dis- course model and transfer learning. 
arXiv preprint arXiv:1907.00962.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A study of controversial topics on social media", "authors": [ { "first": "Aseel", "middle": [], "last": "Addawood", "suffix": "" }, { "first": "Masooda", "middle": [], "last": "Bashir", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Third Workshop on Argument Mining (ArgMining2016)", "volume": "", "issue": "", "pages": "1--11", "other_ids": { "DOI": [ "10.18653/v1/W16-2801" ] }, "num": null, "urls": [], "raw_text": "Aseel Addawood and Masooda Bashir. 2016. \"What is your evidence?\" A study of controversial topics on social media. In Proceedings of the Third Work- shop on Argument Mining (ArgMining2016), pages 1-11, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics", "authors": [ { "first": "Ehud", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "Anatoly", "middle": [], "last": "Polnarov", "suffix": "" }, { "first": "Tamar", "middle": [], "last": "Lavee", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Hershcovich", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Gutfreund", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the First Workshop on Argumentation Mining", "volume": "", "issue": "", "pages": "64--68", "other_ids": { "DOI": [ "10.3115/v1/W14-2109" ] }, "num": null, "urls": [], "raw_text": "Ehud Aharoni, Anatoly Polnarov, Tamar Lavee, Daniel Hershcovich, Ran Levy, Ruty Rinott, Dan Gutfre- und, and Noam Slonim. 2014. A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics. 
In Proceedings of the First Workshop on Argumentation Mining, pages 64-68, Baltimore, Maryland. Association for Com- putational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic detection of answers to research questions from Medline abstracts", "authors": [ { "first": "Abdulaziz", "middle": [], "last": "Alamri", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2015, "venue": "Proceedings of BioNLP 15", "volume": "", "issue": "", "pages": "141--146", "other_ids": { "DOI": [ "10.18653/v1/W15-3817" ] }, "num": null, "urls": [], "raw_text": "Abdulaziz Alamri and Mark Stevenson. 2015. Au- tomatic detection of answers to research questions from Medline abstracts. In Proceedings of BioNLP 15, pages 141-146, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic identification of potentially contradictory claims to support systematic reviews", "authors": [ { "first": "Abdulaziz", "middle": [], "last": "Alamri", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Stevensony", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), BIBM '15", "volume": "", "issue": "", "pages": "930--937", "other_ids": { "DOI": [ "10.1109/BIBM.2015.7359808" ] }, "num": null, "urls": [], "raw_text": "Abdulaziz Alamri and Mark Stevensony. 2015. Au- tomatic identification of potentially contradictory claims to support systematic reviews. In Proceed- ings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), BIBM '15, page 930-937, USA. 
IEEE Computer Society.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Beyond genes, proteins, and abstracts: Identifying scientific claims from full-text biomedical articles", "authors": [ { "first": "Catherine", "middle": [], "last": "Blake", "suffix": "" } ], "year": 2010, "venue": "Journal of biomedical informatics", "volume": "43", "issue": "2", "pages": "173--189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Catherine Blake. 2010. Beyond genes, proteins, and abstracts: Identifying scientific claims from full-text biomedical articles. Journal of biomedical informat- ics, 43(2):173-189.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": { "DOI": [ "10.1162/tacl_a_00051" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. 
Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "DART: a dataset of arguments and their relations on Twitter", "authors": [ { "first": "Tom", "middle": [], "last": "Bosc", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "1258--1263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Bosc, Elena Cabrio, and Serena Villata. 2016a. DART: a dataset of arguments and their relations on Twitter. In Proceedings of the Tenth Inter- national Conference on Language Resources and Evaluation (LREC'16), pages 1258-1263, Portoro\u017e, Slovenia. European Language Resources Associa- tion (ELRA).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Tweeties squabbling: Positive and negative results in applying argument mining on social media", "authors": [ { "first": "Tom", "middle": [], "last": "Bosc", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2016, "venue": "COMMA", "volume": "", "issue": "", "pages": "21--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Bosc, Elena Cabrio, and Serena Villata. 2016b. Tweeties squabbling: Positive and negative re- sults in applying argument mining on social media. 
COMMA, 2016:21-32.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Social media as a measurement tool of depression in populations", "authors": [ { "first": "Scott", "middle": [], "last": "Munmun De Choudhury", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Counts", "suffix": "" }, { "first": "", "middle": [], "last": "Horvitz", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 5th ACM International Conference on Web Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Munmun De Choudhury, Scott Counts, and Eric Horvitz. 2013. Social media as a measurement tool of depression in populations. In In Proceedings of the 5th ACM International Conference on Web Sci- ence (Paris, France, May 2-May 4, 2013). WebSci 2013.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Deep learning for pharmacovigilance: recurrent neural network architectures for labeling adverse drug reactions in twitter posts", "authors": [ { "first": "Anne", "middle": [], "last": "Cocos", "suffix": "" }, { "first": "Alexander", "middle": [ "G" ], "last": "Fiks", "suffix": "" }, { "first": "Aaron", "middle": [ "J" ], "last": "Masino", "suffix": "" } ], "year": 2017, "venue": "Journal of the American Medical Informatics Association", "volume": "24", "issue": "4", "pages": "813--821", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne Cocos, Alexander G Fiks, and Aaron J Masino. 2017. Deep learning for pharmacovigilance: re- current neural network architectures for labeling adverse drug reactions in twitter posts. Journal of the American Medical Informatics Association, 24(4):813-821.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A coefficient of agreement for nominal scales. 
Educational and psychological measurement", "authors": [ { "first": "Jacob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "", "volume": "20", "issue": "", "pages": "37--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological mea- surement, 20(1):37-46.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Mining Social Media Data for Biomedical Signals and Health-Related Behavior", "authors": [ { "first": "Ian", "middle": [ "B" ], "last": "Rion Brattig Correia", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Wood", "suffix": "" }, { "first": "Luis", "middle": [ "M" ], "last": "Bollen", "suffix": "" }, { "first": "", "middle": [], "last": "Rocha", "suffix": "" } ], "year": 2020, "venue": "Annual Review of Biomedical Data Science", "volume": "3", "issue": "1", "pages": "433--458", "other_ids": { "DOI": [ "10.1146/annurev-biodatasci-030320-040844" ] }, "num": null, "urls": [], "raw_text": "Rion Brattig Correia, Ian B. Wood, Johan Bollen, and Luis M. Rocha. 2020. Mining Social Me- dia Data for Biomedical Signals and Health- Related Behavior. Annual Review of Biomed- ical Data Science, 3(1):433-458. _eprint: https://doi.org/10.1146/annurev-biodatasci-030320- 040844.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "What is the essence of a claim? 
Cross-domain claim identification", "authors": [ { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2055--2066", "other_ids": { "DOI": [ "10.18653/v1/D17-1218" ] }, "num": null, "urls": [], "raw_text": "Johannes Daxenberger, Steffen Eger, Ivan Habernal, Christian Stab, and Iryna Gurevych. 2017. What is the essence of a claim? Cross-domain claim identi- fication. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2055-2066, Copenhagen, Denmark. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Argument mining on Twitter: Arguments, facts and sources", "authors": [ { "first": "Mihai", "middle": [], "last": "Dusmanu", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2317--2322", "other_ids": { "DOI": [ "10.18653/v1/D17-1245" ] }, "num": null, "urls": [], "raw_text": "Mihai Dusmanu, Elena Cabrio, and Serena Villata. 2017. Argument mining on Twitter: Arguments, facts and sources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2317-2322, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural end-to-end learning for computational argumentation mining", "authors": [ { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "11--22", "other_ids": { "DOI": [ "10.18653/v1/P17-1002" ] }, "num": null, "urls": [], "raw_text": "Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11-22, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Argument extraction for supporting public policy formulation", "authors": [ { "first": "Eirini", "middle": [], "last": "Florou", "suffix": "" }, { "first": "Stasinos", "middle": [], "last": "Konstantopoulos", "suffix": "" }, { "first": "Antonis", "middle": [], "last": "Koukourikos", "suffix": "" }, { "first": "Pythagoras", "middle": [], "last": "Karampiperis", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities", "volume": "", "issue": "", "pages": "49--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eirini Florou, Stasinos Konstantopoulos, Antonis Koukourikos, and Pythagoras Karampiperis. 2013. Argument extraction for supporting public policy formulation. In Proceedings of the 7th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 49-54, Sofia, Bulgaria. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Transfer learning for biomedical named entity recognition with neural networks", "authors": [ { "first": "John", "middle": [ "M" ], "last": "Giorgi", "suffix": "" }, { "first": "Gary", "middle": [ "D" ], "last": "Bader", "suffix": "" } ], "year": 2018, "venue": "Bioinformatics", "volume": "34", "issue": "23", "pages": "4087--4094", "other_ids": { "DOI": [ "10.1093/bioinformatics/bty449" ] }, "num": null, "urls": [], "raw_text": "John M Giorgi and Gary D Bader. 2018. Transfer learning for biomedical named entity recognition with neural networks.
Bioinformatics, 34(23):4087-4094.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Argument extraction from news, blogs, and social media", "authors": [ { "first": "Theodosis", "middle": [], "last": "Goudas", "suffix": "" }, { "first": "Christos", "middle": [], "last": "Louizos", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Petasis", "suffix": "" }, { "first": "Vangelis", "middle": [], "last": "Karkaletsis", "suffix": "" } ], "year": 2014, "venue": "Artificial Intelligence: Methods and Applications", "volume": "", "issue": "", "pages": "287--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theodosis Goudas, Christos Louizos, Georgios Petasis, and Vangelis Karkaletsis. 2014. Argument extraction from news, blogs, and social media. In Artificial Intelligence: Methods and Applications, pages 287-299, Cham. Springer International Publishing.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Argumentation mining in user-generated web discourse", "authors": [ { "first": "Ivan", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "1", "pages": "125--179", "other_ids": { "DOI": [ "10.1162/COLI_a_00276" ] }, "num": null, "urls": [], "raw_text": "Ivan Habernal and Iryna Gurevych. 2017. Argumentation mining in user-generated web discourse.
Computational Linguistics, 43(1):125-179.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Deep learning with word embeddings improves biomedical named entity recognition", "authors": [ { "first": "Maryam", "middle": [], "last": "Habibi", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Weber", "suffix": "" }, { "first": "Mariana", "middle": [], "last": "Neves", "suffix": "" }, { "first": "David", "middle": [ "Luis" ], "last": "Wiegandt", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Leser", "suffix": "" } ], "year": 2017, "venue": "Bioinformatics", "volume": "33", "issue": "14", "pages": "i37--i48", "other_ids": { "DOI": [ "10.1093/bioinformatics/btx228" ] }, "num": null, "urls": [], "raw_text": "Maryam Habibi, Leon Weber, Mariana Neves, David Luis Wiegandt, and Ulf Leser. 2017. Deep learning with word embeddings improves biomedical named entity recognition. Bioinformatics, 33(14):i37-i48.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Misinformation on Social Media", "authors": [], "year": null, "venue": "EMNLP 2020, Online. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.nlpcovid19-2.11" ] }, "num": null, "urls": [], "raw_text": "Misinformation on Social Media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Did you really just have a heart attack? Towards robust detection of personal health mentions in social media", "authors": [ { "first": "Payam", "middle": [], "last": "Karisani", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 World Wide Web Conference", "volume": "", "issue": "", "pages": "137--146", "other_ids": { "DOI": [ "10.1145/3178876.3186055" ] }, "num": null, "urls": [], "raw_text": "Payam Karisani and Eugene Agichtein. 2018. Did you really just have a heart attack? Towards robust detection of personal health mentions in social media. In Proceedings of the 2018 World Wide Web Conference, pages 137-146, Republic and Canton of Geneva, CHE.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization.
In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Identifying and classifying subjective claims", "authors": [ { "first": "Namhee", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Stuart", "middle": [ "W" ], "last": "Shulman", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 8th Annual International Conference on Digital Government Research: Bridging Disciplines & Domains, dg.o '07", "volume": "", "issue": "", "pages": "76--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Namhee Kwon, Liang Zhou, Eduard Hovy, and Stuart W. Shulman. 2007. Identifying and classifying subjective claims. In Proceedings of the 8th Annual International Conference on Digital Government Research: Bridging Disciplines & Domains, dg.o '07, pages 76-81. Digital Government Society of North America.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "BO-LSTM: classifying relations via long short-term memory networks along biomedical ontologies", "authors": [ { "first": "Andre", "middle": [], "last": "Lamurias", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Sousa", "suffix": "" }, { "first": "Luka", "middle": [ "A" ], "last": "Clarke", "suffix": "" }, { "first": "Francisco", "middle": [ "M" ], "last": "Couto", "suffix": "" } ], "year": 2019, "venue": "BMC Bioinformatics", "volume": "20", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.1186/s12859-018-2584-5" ] }, "num": null, "urls": [], "raw_text": "Andre Lamurias, Diana Sousa, Luka A. Clarke, and Francisco M. Couto. 2019. BO-LSTM: classifying relations via long short-term memory networks along biomedical ontologies.
BMC Bioinformatics, 20(1):10.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2019, "venue": "Bioinformatics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1093/bioinformatics/btz682" ] }, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Context dependent claim detection", "authors": [ { "first": "Ran", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bilu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Hershcovich", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1489--1500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ran Levy, Yonatan Bilu, Daniel Hershcovich, Ehud Aharoni, and Noam Slonim. 2014. Context dependent claim detection.
In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1489-1500, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Unsupervised corpus-wide claim detection", "authors": [ { "first": "Ran", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Shai", "middle": [], "last": "Gretz", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Sznajder", "suffix": "" }, { "first": "Shay", "middle": [], "last": "Hummel", "suffix": "" }, { "first": "Ranit", "middle": [], "last": "Aharonov", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 4th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "79--84", "other_ids": { "DOI": [ "10.18653/v1/W17-5110" ] }, "num": null, "urls": [], "raw_text": "Ran Levy, Shai Gretz, Benjamin Sznajder, Shay Hummel, Ranit Aharonov, and Noam Slonim. 2017. Unsupervised corpus-wide claim detection. In Proceedings of the 4th Workshop on Argument Mining, pages 79-84, Copenhagen, Denmark.
Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A BERT-based universal model for both within- and cross-sentence clinical temporal relation extraction", "authors": [ { "first": "Chen", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Dmitriy", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "Guergana", "middle": [], "last": "Savova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "65--71", "other_ids": { "DOI": [ "10.18653/v1/W19-1908" ] }, "num": null, "urls": [], "raw_text": "Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard, and Guergana Savova. 2019. A BERT-based universal model for both within- and cross-sentence clinical temporal relation extraction. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 65-71, Minneapolis, Minnesota, USA. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Context-independent claim detection for argument mining", "authors": [ { "first": "Marco", "middle": [], "last": "Lippi", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Torroni", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15", "volume": "", "issue": "", "pages": "185--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Lippi and Paolo Torroni. 2015. Context-independent claim detection for argument mining. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pages 185-191.
AAAI Press.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Praktische Argumentationstheorie: theoretische Grundlagen, praktische Begr\u00fcndung und Regeln wichtiger Argumentationsarten. Hochschulschrift", "authors": [ { "first": "Christoph", "middle": [], "last": "Lumer", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christoph Lumer. 1990. Praktische Argumentationstheorie: theoretische Grundlagen, praktische Begr\u00fcndung und Regeln wichtiger Argumentationsarten. Hochschulschrift, University of M\u00fcnster, Braunschweig.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "CRST: a claim retrieval system in Twitter", "authors": [ { "first": "Wenjia", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wenhan", "middle": [], "last": "Chao", "suffix": "" }, { "first": "Zhunchen", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "43--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenjia Ma, WenHan Chao, Zhunchen Luo, and Xin Jiang. 2018. CRST: a claim retrieval system in Twitter. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 43-47, Santa Fe, New Mexico.
Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Transformer-based Argument Mining for Healthcare Applications", "authors": [ { "first": "Tobias", "middle": [], "last": "Mayer", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2020, "venue": "24th European Conference on Artificial Intelligence (ECAI2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tobias Mayer, Elena Cabrio, and Serena Villata. 2020. Transformer-based Argument Mining for Healthcare Applications. In 24th European Conference on Artificial Intelligence (ECAI2020), Santiago de Compostela, Spain.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Creating an argumentation corpus: Do theories apply to real arguments? A case study on the legal argumentation of the ECHR", "authors": [ { "first": "Raquel", "middle": [], "last": "Mochales", "suffix": "" }, { "first": "Aagje", "middle": [], "last": "Ieven", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL '09", "volume": "", "issue": "", "pages": "21--30", "other_ids": { "DOI": [ "10.1145/1568234.1568238" ] }, "num": null, "urls": [], "raw_text": "Raquel Mochales and Aagje Ieven. 2009. Creating an argumentation corpus: Do theories apply to real arguments? A case study on the legal argumentation of the ECHR. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL '09, pages 21-30, New York, NY, USA.
Association for Computing Machinery.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Argumentation mining", "authors": [ { "first": "Raquel", "middle": [], "last": "Mochales", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2011, "venue": "Artificial Intelligence and Law", "volume": "19", "issue": "1", "pages": "1--22", "other_ids": { "DOI": [ "10.1007/s10506-010-9104-x" ] }, "num": null, "urls": [], "raw_text": "Raquel Mochales and Marie-Francine Moens. 2011. Argumentation mining. Artificial Intelligence and Law, 19(1):1-22.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Automatic detection of arguments in legal texts", "authors": [ { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Boiy", "suffix": "" }, { "first": "Raquel", "middle": [ "Mochales" ], "last": "Palau", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Reed", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 11th International Conference on Artificial Intelligence and Law, ICAIL '07", "volume": "", "issue": "", "pages": "225--230", "other_ids": { "DOI": [ "10.1145/1276318.1276362" ] }, "num": null, "urls": [], "raw_text": "Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection of arguments in legal texts. In Proceedings of the 11th International Conference on Artificial Intelligence and Law, ICAIL '07, pages 225-230, New York, NY, USA.
Association for Computing Machinery.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features", "authors": [ { "first": "Azadeh", "middle": [], "last": "Nikfarjam", "suffix": "" }, { "first": "Abeed", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "Karen", "middle": [], "last": "O'Connor", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Ginn", "suffix": "" }, { "first": "Graciela", "middle": [], "last": "Gonzalez", "suffix": "" } ], "year": 2015, "venue": "Journal of the American Medical Informatics Association", "volume": "22", "issue": "3", "pages": "671--681", "other_ids": { "DOI": [ "10.1093/jamia/ocu041" ] }, "num": null, "urls": [], "raw_text": "Azadeh Nikfarjam, Abeed Sarker, Karen O'Connor, Rachel Ginn, and Graciela Gonzalez. 2015. Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. Journal of the American Medical Informatics Association, 22(3):671-681.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Parsing argued opinion structure in Twitter content", "authors": [ { "first": "Asma", "middle": [], "last": "Ouertatani", "suffix": "" }, { "first": "Ghada", "middle": [], "last": "Gasmi", "suffix": "" }, { "first": "Chiraz", "middle": [], "last": "Latiri", "suffix": "" } ], "year": 2020, "venue": "Journal of Intelligent Information Systems", "volume": "", "issue": "", "pages": "1--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asma Ouertatani, Ghada Gasmi, and Chiraz Latiri. 2020. Parsing argued opinion structure in Twitter content.
Journal of Intelligent Information Systems, pages 1-27.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Argumentation mining: The detection, classification and structure of arguments in text", "authors": [ { "first": "Raquel", "middle": [], "last": "Mochales Palau", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL '09", "volume": "", "issue": "", "pages": "98--107", "other_ids": { "DOI": [ "10.1145/1568234.1568246" ] }, "num": null, "urls": [], "raw_text": "Raquel Mochales Palau and Marie-Francine Moens. 2009. Argumentation mining: The detection, classification and structure of arguments in text. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL '09, pages 98-107, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "A model for mining public health topics from Twitter", "authors": [ { "first": "Michael", "middle": [ "J" ], "last": "Paul", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2012, "venue": "Health", "volume": "11", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J. Paul and Mark Dredze. 2012. A model for mining public health topics from Twitter. Health, 11(1).", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Simple transformers", "authors": [ { "first": "Thilina", "middle": [], "last": "Rajapakse", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thilina Rajapakse. 2019. Simple transformers.
https://github.com/ThilinaRajapakse/simpletransformers.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "On the retrieval of Wikipedia articles containing claims on controversial topics", "authors": [ { "first": "Haggai", "middle": [], "last": "Roitman", "suffix": "" }, { "first": "Shay", "middle": [], "last": "Hummel", "suffix": "" }, { "first": "Ella", "middle": [], "last": "Rabinovich", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Sznajder", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Aharoni", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 25th International Conference Companion on World Wide Web, WWW '16 Companion", "volume": "", "issue": "", "pages": "991--996", "other_ids": { "DOI": [ "10.1145/2872518.2891115" ] }, "num": null, "urls": [], "raw_text": "Haggai Roitman, Shay Hummel, Ella Rabinovich, Benjamin Sznajder, Noam Slonim, and Ehud Aharoni. 2016. On the retrieval of Wikipedia articles containing claims on controversial topics. In Proceedings of the 25th International Conference Companion on World Wide Web, WWW '16 Companion, pages 991-996, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Argument extraction from news", "authors": [ { "first": "Christos", "middle": [], "last": "Sardianos", "suffix": "" }, { "first": "Ioannis", "middle": [ "Manousos" ], "last": "Katakis", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Petasis", "suffix": "" }, { "first": "Vangelis", "middle": [], "last": "Karkaletsis", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2nd Workshop on Argumentation Mining", "volume": "", "issue": "", "pages": "56--66", "other_ids": { "DOI": [ "10.3115/v1/W15-0508" ] }, "num": null, "urls": [], "raw_text": "Christos Sardianos, Ioannis Manousos Katakis, Georgios Petasis, and Vangelis Karkaletsis. 2015. Argument extraction from news. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 56-66, Denver, CO.
Association for Computational Linguistics.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Social media mining for toxicovigilance: automatic monitoring of prescription medication abuse from Twitter", "authors": [ { "first": "Abeed", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "Karen", "middle": [], "last": "O'Connor", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Ginn", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Scotch", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Malone", "suffix": "" }, { "first": "Graciela", "middle": [], "last": "Gonzalez", "suffix": "" } ], "year": 2016, "venue": "Drug safety", "volume": "39", "issue": "3", "pages": "231--240", "other_ids": { "DOI": [ "10.1007/s40264-015-0379-4" ] }, "num": null, "urls": [], "raw_text": "Abeed Sarker, Karen O'Connor, Rachel Ginn, Matthew Scotch, Karen Smith, Dan Malone, and Graciela Gonzalez. 2016. Social media mining for toxicovigilance: automatic monitoring of prescription medication abuse from Twitter.
Drug safety, 39(3):231-240.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "From witch's shot to music making bones - resources for medical laymen to technical language and vice versa", "authors": [ { "first": "Laura", "middle": [], "last": "Seiffe", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Marten", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Mikhailov", "suffix": "" }, { "first": "Sven", "middle": [], "last": "Schmeier", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "M\u00f6ller", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Roller", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "6185--6192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Seiffe, Oliver Marten, Michael Mikhailov, Sven Schmeier, Sebastian M\u00f6ller, and Roland Roller. 2020. From witch's shot to music making bones - resources for medical laymen to technical language and vice versa. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 6185-6192, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "HClaimE: A tool for identifying health claims in health news headlines", "authors": [ { "first": "Yuan", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Bei", "suffix": "" } ], "year": 2019, "venue": "Information Processing & Management", "volume": "56", "issue": "", "pages": "1220--1233", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan Shi and Yu Bei. 2019. HClaimE: A tool for identifying health claims in health news headlines.
Information Processing & Management, 56(4):1220-1233.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Social media argumentation mining: the quest for deliberateness in raucousness", "authors": [ { "first": "Jan", "middle": [], "last": "\u0160najder", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1701.00168" ] }, "num": null, "urls": [], "raw_text": "Jan \u0160najder. 2016. Social media argumentation mining: the quest for deliberateness in raucousness. arXiv preprint arXiv:1701.00168.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Using Neural Networks for Relation Extraction from Biomedical Literature", "authors": [ { "first": "Diana", "middle": [], "last": "Sousa", "suffix": "" }, { "first": "Andre", "middle": [], "last": "Lamurias", "suffix": "" }, { "first": "Francisco", "middle": [ "M" ], "last": "Couto", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "289--305", "other_ids": { "DOI": [ "10.1007/978-1-0716-0826-5_14" ] }, "num": null, "urls": [], "raw_text": "Diana Sousa, Andre Lamurias, and Francisco M. Couto. 2021. Using Neural Networks for Relation Extraction from Biomedical Literature, pages 289-305. Springer US, New York, NY.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Parsing argumentation structures in persuasive essays", "authors": [ { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "3", "pages": "619--659", "other_ids": { "DOI": [ "10.1162/COLI_a_00295" ] }, "num": null, "urls": [], "raw_text": "Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays.
Computational Linguistics, 43(3):619-659.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Cross-topic argument mining from heterogeneous sources", "authors": [ { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Rai", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3664--3674", "other_ids": { "DOI": [ "10.18653/v1/D18-1402" ] }, "num": null, "urls": [], "raw_text": "Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018. Cross-topic argument mining from heterogeneous sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3664-3674, Brussels, Belgium.
Association for Computational Linguistics.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Finding potentially unsafe nutritional supplements from user reviews with topic modeling", "authors": [ { "first": "Ryan", "middle": [], "last": "Sullivan", "suffix": "" }, { "first": "Abeed", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "Karen", "middle": [], "last": "O'Connor", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Goodin", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Karlsrud", "suffix": "" }, { "first": "Graciela", "middle": [], "last": "Gonzalez", "suffix": "" } ], "year": 2016, "venue": "Biocomputing", "volume": "", "issue": "", "pages": "528--539", "other_ids": { "DOI": [ "10.1142/9789814749411_0048" ] }, "num": null, "urls": [], "raw_text": "Ryan Sullivan, Abeed Sarker, Karen O'Connor, Amanda Goodin, Mark Karlsrud, and Graciela Gonzalez. 2016. Finding potentially unsafe nutritional supplements from user reviews with topic modeling. In Biocomputing 2016, pages 528-539, Kohala Coast, Hawaii, USA.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "On the semantic similarity of disease mentions in MEDLINE and Twitter", "authors": [ { "first": "Camilo", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2018, "venue": "Natural Language Processing and Information Systems: 23rd International Conference on Applications of Natural Language to Information Systems, NLDB 2018", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/978-3-319-91947-8_34" ] }, "num": null, "urls": [], "raw_text": "Camilo Thorne and Roman Klinger. 2018. On the semantic similarity of disease mentions in MEDLINE and Twitter.
In Natural Language Processing and Information Systems: 23rd International Conference on Applications of Natural Language to Information Systems, NLDB 2018, Paris, France, June 13-15, 2018, Proceedings, Cham. Springer International Publishing.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Fact or Fiction: Verifying Scientific Claims", "authors": [ { "first": "David", "middle": [], "last": "Wadden", "suffix": "" }, { "first": "Shanchuan", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Lucy", "middle": [ "Lu" ], "last": "Wang", "suffix": "" }, { "first": "Madeleine", "middle": [], "last": "Van Zuylen", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "7534--7550", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.609" ] }, "num": null, "urls": [], "raw_text": "David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or Fiction: Verifying Scientific Claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7534-7550, Online. Association for Computational Linguistics.
", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Social media analysis for e-health and medical purposes", "authors": [ { "first": "Katarzyna", "middle": [], "last": "Wegrzyn-Wolska", "suffix": "" }, { "first": "Lamine", "middle": [], "last": "Bougueroua", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Dziczkowski", "suffix": "" } ], "year": 2011, "venue": "2011 International Conference on Computational Aspects of Social Networks (CASoN)", "volume": "", "issue": "", "pages": "278--283", "other_ids": { "DOI": [ "10.1109/CASON.2011.6085958" ] }, "num": null, "urls": [], "raw_text": "Katarzyna Wegrzyn-Wolska, Lamine Bougueroua, and Grzegorz Dziczkowski. 2011. Social media analysis for e-health and medical purposes.
In 2011 International Conference on Computational Aspects of Social Networks (CASoN), pages 278-283.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text":
"Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Mining health social media with sentiment analysis", "authors": [ { "first": "Fu-Chen", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Anthony", "middle": [ "J", "T" ], "last": "Lee", "suffix": "" }, { "first": "Sz-Chen", "middle": [], "last": "Kuo", "suffix": "" } ], "year": 2016, "venue": "Journal of Medical Systems", "volume": "40", "issue": "11", "pages": "", "other_ids": { "DOI": [ "10.1007/s10916-016-0604-4" ] }, "num": null, "urls": [], "raw_text": "Fu-Chen Yang, Anthony J.T. Lee, and Sz-Chen Kuo. 2016. Mining health social media with sentiment analysis. Journal of Medical Systems, 40(11):236.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "A scalable framework to detect personal health mentions on Twitter", "authors": [ { "first": "Zhijun", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Fabbri", "suffix": "" }, { "first": "S", "middle": [ "Trent" ], "last": "Rosenbloom", "suffix": "" }, { "first": "Bradley", "middle": [], "last": "Malin", "suffix": "" } ], "year": 2015, "venue": "Journal of Medical Internet Research", "volume": "17", "issue": "6", "pages": "", "other_ids": { "DOI": [ "10.2196/jmir.4305" ] }, "num": null, "urls": [], "raw_text": "Zhijun Yin, Daniel Fabbri, S Trent Rosenbloom, and Bradley Malin.
2015. A scalable framework to detect personal health mentions on Twitter. Journal of Medical Internet Research, 17(6):e138.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Tweet with a biomedical claim (highlighted).", "uris": null, "type_str": "figure" }, "TABREF1": { "text": "Examples of the four categories of search terms used to retrieve tweets about COVID-19, the measles, cystic fibrosis, and depression via the Twitter API.", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF3": { "text": "Inter-annotator agreement during development of the annotation guidelines and for the final corpus.", "content": "
C/N: Claim/non-claim, E/I/N: Explicit/Implicit/Non-claim, Span: Token-level annotation of the explicit claim expression.
", "type_str": "table", "num": null, "html": null }, "TABREF5": { "text": "Distribution of the annotated classes and average instance lengths (in tokens).", "content": "
        incompl.    blended     anecdotal   impl.
M        8  .16     14  .28      9  .18     14  .28
C       17  .34     15  .30      8  .16     14  .28
CF      12  .24     10  .20     26  .52     18  .36
D       16  .32      9  .18     23  .46     11  .22
total   53  .27     48  .24     66  .33     57  .29
", "type_str": "table", "num": null, "html": null }, "TABREF6": { "text": "", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF7": { "text": "presents corpus statistics. Out of the 1200 documents in the corpus, 537 instances (44.75 %) contain a claim and 663 (55.25 %) do not. From all claim instances, 370 tweets are explicit (68 %). The claims are not equally distributed across topics (not shown in table): 61 % of measle-related tweets contain a claim, 49 % of those related to COVID-19, 40 % of cystic fibrosis tweets, and 29 % for depression.", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF8": { "text": "", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF10": { "text": "", "content": "
: Examples of incorrect predictions by the LG model in the binary setting (G: Gold, P: Predictions; n: no claim; c: claim).
", "type_str": "table", "num": null, "html": null } } } }