{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:33:51.168727Z" }, "title": "Tackling Fake News Detection by Interactively Learning Representations using Graph Neural Networks", "authors": [ { "first": "Nikhil", "middle": [], "last": "Mehta", "suffix": "", "affiliation": { "laboratory": "", "institution": "Purdue University", "location": { "settlement": "West Lafayette", "region": "IN" } }, "email": "mehta52@purdue.edu" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "", "affiliation": { "laboratory": "", "institution": "Purdue University", "location": { "settlement": "West Lafayette", "region": "IN" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Easy access, variety of content, and fast widespread interactions are some of the reasons that have made social media increasingly popular in today's society. However, this has also enabled the widespread propagation of fake news, text that is published with an intent to spread misinformation and sway beliefs. Detecting fake news is important to prevent misinformation and maintain a healthy society. While prior works have tackled this problem by building supervised learning systems, automatically modeling the social media landscape that enables the spread of fake news is challenging. At the same time, having humans fact-check all news is not scalable. Thus, in this paper, we propose to approach this problem interactively, where human insight can be continually combined with an automated system, enabling better social media representation quality. Our experiments show performance improvements in this setting.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Easy access, variety of content, and fast widespread interactions are some of the reasons that have made social media increasingly popular in today's society.
However, this has also enabled the widespread propagation of fake news, text that is published with an intent to spread misinformation and sway beliefs. Detecting fake news is important to prevent misinformation and maintain a healthy society. While prior works have tackled this problem by building supervised learning systems, automatically modeling the social media landscape that enables the spread of fake news is challenging. At the same time, having humans fact-check all news is not scalable. Thus, in this paper, we propose to approach this problem interactively, where human insight can be continually combined with an automated system, enabling better social media representation quality. Our experiments show performance improvements in this setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over the last decade, an increasing number of people access news online (Amy Mitchell, 2016), often using social networking platforms to engage, consume, and propagate this content in their social circles. Social networks provide easy means to distribute news and commentary, resulting in a sharp increase in the number of media outlets (Ribeiro et al., 2018), and a rapid spread of content. In particular, false news stories tend to spread at lightning speed and, due to their volume, cannot be checked manually.
An alternative to fact-checking claims, which is arguably easier to scale, is to focus on their source, and ask: whom can you trust?", "cite_spans": [ { "start": 336, "end": 358, "text": "(Ribeiro et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prior works have formulated this as a traditional classification problem using techniques such as feature-based SVMs (Baly et al., 2018, 2020), and more recently Graph Neural Networks (GNNs) (Li and Goldwasser, 2019; Shu et al., 2019; Han et al., 2020; Nguyen et al., 2020), which create a better representation of social media interactions. Graphs often consist of nodes corresponding to news sources (associated with a discrete factuality level: high, low, or mixed), the articles they release, and their social context, corresponding to social media users engaging and sharing information in their networks. GNNs can utilize this information by using edge interactions to create node representations contextualized by their graph neighbours. This leads to a stronger representation of the complex information landscape on social media that enables fake news to spread, allowing it to be better detected.
For this reason, we adopt graphs as our automated framework (Fig 1).", "cite_spans": [ { "start": 118, "end": 136, "text": "(Baly et al., 2018", "ref_id": "BIBREF2" }, { "start": 137, "end": 157, "text": "(Baly et al., , 2020", "ref_id": "BIBREF3" }, { "start": 207, "end": 232, "text": "(Li and Goldwasser, 2019;", "ref_id": "BIBREF6" }, { "start": 233, "end": 250, "text": "Shu et al., 2019;", "ref_id": "BIBREF17" }, { "start": 251, "end": 268, "text": "Han et al., 2020;", "ref_id": "BIBREF5" }, { "start": 269, "end": 289, "text": "Nguyen et al., 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite the success of these works, fake news detection is still a challenging research problem, and human performance is significantly higher than that of fully automated systems (Shaar et al., 2020). Clearly, having humans fact-check every information source is not scalable. Thus, our goal in this paper is to explore a different form of interaction with humans, where they can provide advice (Mehta and Goldwasser, 2019) to the automated system. Advice corresponds to localized judgements (provided through natural language) that help characterize the content and social interactions associated with sources. These judgements, associated with article and social media user nodes, are then propagated through the information graph using the GNN, allowing the system to take advantage of them to improve its representation.
Since advice does not directly provide source labels, a time-consuming process that requires a global view of a source's interactions, it is scalable.", "cite_spans": [ { "start": 171, "end": 191, "text": "(Shaar et al., 2020)", "ref_id": "BIBREF16" }, { "start": 388, "end": 416, "text": "(Mehta and Goldwasser, 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For example, one challenging aspect of the problem is that low-factuality (\"fake news\") sources may not always propagate false information (some of the articles they publish may be factual), and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Figure 1 caption: Advice is added to the information graph by adding new nodes/edges (teal) based on the advice type (news spreader or relevant claims). Advice then provides information that can be useful to clear up the complex social space the graph is modeling.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "vice-versa (leading to model confusion). Human interaction, in the form of advice, can help clean up some of this uncertainty by identifying claims containing egregious falsehoods. The model could then use this information and trust the sources making these claims less. We refer to this form of advice, mapping a specific article to known falsehoods, as relevant claim advice. In another case, referred to as news spreader advice, a human could inform the system that a user that is spreading a source's articles frequently spreads lies, which would increase the likelihood that that source, and any other source this user spreads articles from, are fake.
Fig 1 shows how both of these advice types can be seamlessly added to an information graph.", "cite_spans": [], "ref_spans": [ { "start": 651, "end": 662, "text": "Fig 1 shows", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we show that our protocol, in which humans iteratively provide these types of advice by interacting with the model (even after it is trained), improves overall fake news detection performance. In summary, we formulate fake news source detection as a reasoning problem over an information graph. We then suggest an interactive learning-based approach for incorporating human knowledge as advice to clean up uncertain graph decisions, which allows us to better learn and reason on this graph. Finally, we perform experiments that demonstrate that this setup leads to performance improvements on fake news source detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We start by defining our social context information graph. It consists of sources (S), articles they publish (A), and Twitter users that interact with sources/articles (U). Our goal is fake news source factuality classification. Each node in the graph is represented by a high-dimensional feature vector (similar to prior work (Baly et al., 2018, 2020; Nguyen et al., 2020)) to provide knowledge to the model that can be utilized when learning the graph embedding. Source and user feature vectors are created by concatenating embeddings based on their Twitter profiles (SBERT + features, details in Appendix A.2.1). Sources can also include YouTube profile embeddings.
Articles are represented by encoding their text into an SBERT RoBERTa embedding.", "cite_spans": [ { "start": 331, "end": 349, "text": "(Baly et al., 2018", "ref_id": "BIBREF2" }, { "start": 350, "end": 370, "text": "(Baly et al., , 2020", "ref_id": "BIBREF3" }, { "start": 371, "end": 391, "text": "Nguyen et al., 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Creation and Training", "sec_num": "2.1" }, { "text": "Our graph is formed by first adding all the sources as individual nodes. We then scrape and add up to 300 articles (a_i) for each source, connecting each with an edge to the source that published it (e = {s_i, a_j}). Next, we add social context to the graph via Twitter users that interact with sources. We add up to 5000 users that follow sources, and users that tweet links to any articles in the graph within a 3-month period of the article being published (e = {s_i, u_j}, e = {a_i, u_j}). Users that follow/engage with sources are likely to be aligned with/propagating the views of the sources, and modeling this can be useful. Finally, in order to capture the social interactions between users in the graph, which is critical to capturing fake news propagation on social media, we scrape up to 5000 followers of each Twitter user and add an edge between a pair of existing users if one follows the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Creation and Training", "sec_num": "2.1" }, { "text": "In order to learn the information captured by our information graph, we train a GNN to learn an initial embedding, on top of which we will apply the interactive protocols (Sec 2.2). As the node embedding function, we utilize Relational Graph Convolutional Networks (R-GCN) (Schlichtkrull et al., 2018), since they can model the typed social media relationships well.
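To make the node update concrete, here is a minimal NumPy sketch of the relational message passing an R-GCN layer performs. It is a simplification, not the DGL-based implementation we actually train: it uses a uniform degree normalization in place of the per-relation constants c_{i,r} of Schlichtkrull et al. (2018), and dense per-relation weight matrices.

```python
import numpy as np

def rgcn_layer(h, edges, rel_weights, self_weight):
    # One simplified R-GCN message-passing step:
    #   h_i' = ReLU(W_self h_i + (1/deg(i)) * sum over incoming (j, i, r) of W_r h_j)
    # h: (N, d_in) node features; edges: list of (src, dst, rel) triples;
    # rel_weights: (num_rels, d_in, d_out); self_weight: (d_in, d_out).
    out = h @ self_weight
    msg = np.zeros_like(out)
    deg = np.zeros((h.shape[0], 1))
    for src, dst, rel in edges:
        msg[dst] += h[src] @ rel_weights[rel]
        deg[dst] += 1.0
    return np.maximum(out + msg / np.clip(deg, 1.0, None), 0.0)

# Toy graph: node 0 = source, node 1 = article, node 2 = user;
# relation 0 = publishes, relation 1 = tweets.
h = np.ones((3, 4))
rel_w = np.full((2, 4, 4), 0.1)
self_w = np.eye(4)
h_next = rgcn_layer(h, [(0, 1, 0), (2, 1, 1)], rel_w, self_w)
```

The point of the separate weight matrix per edge type is that publisher, follower, and tweet relations can contribute differently to a node's embedding; DGL provides this layer out of the box.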
We achieve meaningful representations that capture the factuality of the different nodes in our graph by optimizing the Node Classification (NC) objective of fake news detection. After obtaining the source representation o_s from the R-GCN, we pass it through the softmax activation function \u03c3 and then train using the categorical cross-entropy loss:", "cite_spans": [ { "start": 271, "end": 299, "text": "(Schlichtkrull et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Creation and Training", "sec_num": "2.1" }, { "text": "L_{nc} = -\\sum_{i=1}^{C} y_i \\log(\\sigma(o_s)_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Creation and Training", "sec_num": "2.1" }, { "text": "where the C classes for y_i are either high, mixed, or low factuality, and s is the current source.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Creation and Training", "sec_num": "2.1" }, { "text": "We now describe the two advice protocols we utilize in this paper. As mentioned in the introduction, in this work we define advice as a form of human-provided judgement (typically given through natural language) about intermediate relationships in the information graph that cleans up the space of complex judgements made by the GNN, allowing us to better capture the challenging landscape on social media that enables fake news to spread (Fig 1). Advice is provided by humans interactively and continuously, so the process is scalable (not many judgements are needed, and they can always be provided, even after the system is deployed). In this way, our advice protocols provide a mechanism for humans to interact with the automated graph system.
We use two forms of advice:", "cite_spans": [], "ref_spans": [ { "start": 444, "end": 452, "text": "( Fig 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Advice Protocols", "sec_num": "2.2" }, { "text": "When a human provides relevant claim advice, they have some prior knowledge about a certain claim (or news statement), and convey this information (the claim and their belief about its factuality) to the model. For example, a human may know that a certain claim is not factual (perhaps many users on social media spread it and thus the human has seen it before). The human would then provide this claim and a message about its factuality through natural language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relevant Claim Advice", "sec_num": "2.2.1" }, { "text": "Once a human has provided advice in the form of a claim that may be relevant, the model must decide which articles (if any) the claim is relevant for. Once it does so, it can add a new node to the graph for the claim (represented, like the article text nodes, with an SBERT RoBERTa embedding), and connect it to the relevant article(s), allowing the advice knowledge to easily propagate through the graph (either by re-training the GNN or using the trained GNN to embed the advice node appropriately \u2192 we evaluate both setups in Sec 3). This automated setup requires minimal effort from the human, making the advice simple to provide.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relevant Claim Advice", "sec_num": "2.2.1" }, { "text": "To do this, the model first filters a subset of sources (a process we call filtering) whose articles could be candidates to receive advice. As mentioned earlier, advice cleans up complexities in the information graph, so these sources are ones whose labels the model predicts with low confidence (we rank softmax scores for this).
Then, for each filtered source's article, the model decides if the claim provided by the human is relevant by analyzing content in two ways. (1) First, a heuristic is used to determine if the advice and the article are talking about the same event. To do this, the model extracts the entities from the advice claim and the article (we use the FLAIR tagger (Akbik et al., 2019)) and determines if any of them overlap. If they do, the model also checks the date the advice claim was made, and makes sure it is within a one-week period of the article being published.", "cite_spans": [ { "start": 698, "end": 718, "text": "(Akbik et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Relevant Claim Advice", "sec_num": "2.2.1" }, { "text": "(2) Then, to further check content relevance, we use an entailment model (Parikh et al., 2016) and a sentence selection model (Nie et al., 2019) to check if any sentences from the article (chosen by the sentence selection model) entail the advice claim. If they do, the chance that the two are talking about similar content is higher. If there is an entailment, the advice statement node d is connected to the article a with an edge.
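The two-step relevance check can be sketched as follows; the function and field names here are ours, and the scoring function stands in for the FLAIR tagger and the entailment + sentence selection models used in practice.

```python
from datetime import date

def passes_event_filter(claim_entities, article_entities,
                        claim_date, article_date, window_days=7):
    # Step (1): a claim is a candidate for an article only if at least
    # one named entity overlaps and the claim was made within one week
    # of the article being published.
    if not set(claim_entities) & set(article_entities):
        return False
    return abs((claim_date - article_date).days) <= window_days

def is_relevant(claim, article, entail_score_fn, threshold=0.5):
    # Step (2): after the event filter, connect the advice node only if
    # some article sentence entails the claim; entail_score_fn is a
    # placeholder for the entailment + sentence selection models.
    if not passes_event_filter(claim['entities'], article['entities'],
                               claim['date'], article['date']):
        return False
    return max(entail_score_fn(s, claim['text'])
               for s in article['sentences']) >= threshold

# Toy example with illustrative fields.
claim = {'entities': {'25th Amendment'}, 'date': date(2021, 1, 8),
         'text': 'toy claim text'}
article = {'entities': {'25th Amendment', 'Congress'},
           'date': date(2021, 1, 6),
           'sentences': ['first sentence', 'second sentence']}
```

If the check passes, the claim node is attached to the article with an edge, and the GNN propagates the advice from there.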
All advice is also connected to a special label node h, m, or l, representing 'high', 'mixed', or 'low' factuality, based on the advice label (which is provided by the human), so that the model can easily represent that information.", "cite_spans": [ { "start": 73, "end": 94, "text": "(Parikh et al., 2016)", "ref_id": "BIBREF11" }, { "start": 126, "end": 144, "text": "(Nie et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Relevant Claim Advice", "sec_num": "2.2.1" }, { "text": "In our interactive process, which we evaluate in Sec 3.2, a human can continuously provide relevant claims (through natural language) based on knowledge they possess as advice, and through the process described above, the model can determine which articles to use it for (thus connecting the advice in the graph). In this way, the human interacts with the system to clear up potential confusion about certain articles, which propagates via the graph through sources and users, leading to better fake news detection performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relevant Claim Advice", "sec_num": "2.2.1" }, { "text": "When providing news spreader advice, the human informs the system that a certain user is a bad actor, meaning that they frequently spread lies. This knowledge would increase the likelihood that articles this user tweets, and other users they interact with, are also non-factual. The user is then connected via an edge to a special 'low' factuality node, signifying to the model the set of users that are deemed not to be trusted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "News Spreader Advice", "sec_num": "2.2.2" }, { "text": "In this preliminary work, we simulate the two previous forms of human-provided advice by collecting data from fact-checking websites (PolitiFact, Snopes, USA Today, The Washington Post) and Twitter (details in Appendix A.1).
For relevant claim advice, we scrape all claims fact-checked by these websites and their factuality scores, and use them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simulating Advice", "sec_num": "2.2.3" }, { "text": "[Table 1 excerpt: Model Performance (Acc / Macro F1 / # of Advice). M1: Majority class, 52.43 / 22.93 / -. M2: Best model from (Baly et al., 2020), 71.52 / 67.25 / -. M3: Our replication of (Baly et al., 2020), 69] This simulates humans providing advice in the real world, as a claim and some factuality insight about it are given. For news spreader advice, we use the Twitter API to determine all users that have been suspended since we initially collected our dataset, and use them as our news spreaders. Twitter manually suspended most of these users after the storming of the US Capitol, so using this data allows us to accurately simulate human advice. Although in this work we did not explicitly ask users to provide advice based on our learned graph model, our approximation of human advice was provided by human experts, and is thus relatively close to real advice that a human could provide. Relevant claim advice is based on real news claims that experts have associated factuality labels with, and Twitter manually suspended the users we used for news spreader advice.", "cite_spans": [ { "start": 75, "end": 94, "text": "(Baly et al., 2020)", "ref_id": "BIBREF3" }, { "start": 132, "end": 151, "text": "(Baly et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Model Performance Acc", "sec_num": null }, { "text": "To evaluate our model's ability to predict the factuality of news media, we used the Media Bias/Fact Check (MBFC) dataset (Baly et al., 2018, 2020) (859 sources, each labeled on a 3-point scale based on their factuality: low, mixed, and high). We provide graph statistics in App. A. Table 1 shows our results.
We average results over all 5 data splits released by (Baly et al., 2020), using 20% of the training set sources as a development set, and report accuracy and Macro F1-score for fake news source classification. We compare our advice protocol models to the baseline graph-based model trained only on node classification (NC, no advice provided; M4). For completeness, we include the results of the SOTA (Baly et al., 2020) (M2), as well as a replication of their setup using the data we scraped (and their code). Our replication results are worse than their reported performance, so we hypothesize that running our setup on their original data may lead to better overall performance.", "cite_spans": [ { "start": 123, "end": 141, "text": "(Baly et al., 2018", "ref_id": "BIBREF2" }, { "start": 142, "end": 162, "text": "(Baly et al., , 2020", "ref_id": "BIBREF3" }, { "start": 380, "end": 399, "text": "(Baly et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 298, "end": 305, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Dataset and Collection", "sec_num": "3.1" }, { "text": "For relevant claim advice, we evaluate settings in which we provide all the advice we scraped (29,673 statements; M5), in which we provide advice only for the 25% of sources our model is least confident about during train/dev/test time (M6; all advice that passes the event filter is used \u2192 at least one entity in the article's title matches the advice claim and the dates are within one week of each other), and in which we provide advice for all sources and make sure articles pass the event + entailment + sentence selection criteria (M7; full setup in Sec 2.2.1). In all these setups, the advice is provided on the best model in M4, and then parameters are reset and the model is re-trained to learn how to incorporate the advice.
M8 is different and more interactive: advice is first provided for the 50% of sources the model is least confident about, based on the protocol in Sec 2.2.1, and the model is re-trained. Then, the rest of the advice is provided as in M7, except this time the model is not re-trained. This simulates advice being continuously provided interactively by a human in the real world, and performance still improves. In this setting, as no re-training of the model is necessary, advice can be utilized quickly. All setups improve performance over the baseline, and the filtering + sentence selection approach (M7) leads to the best performance, showing that it matters that the advice content matches the article. Thus, when humans provide advice that is more likely to match the content of the articles, we expect further performance improvements, likely with less advice needed.
Furthermore, once the advice is provided, we can add more (M8, M12) and still see performance improvements without having to re-train the model, demonstrating a true interactive scenario, where a human can continuously interact with an automated system. In addition, providing advice as a localized judgement is simpler and easier than labelling an entire source, so large amounts of advice can be collected from different experts to improve results. In the future, when we experiment with humans providing advice that is more content-relevant (rather than simulated), the amount of advice needed could also decrease.
We showed the benefits of two forms of advice (relevant claims and news spreaders), provided either all at once or continuously. In the future, we plan to have humans actually provide this advice, and to explore other advice types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "4" }, { "text": "We thank the anonymous reviewers of this paper for all of their vital feedback. This work was partially supported by NSF CAREER award IIS-2048001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "5" }, { "text": "To the best of our knowledge, no code of ethics was violated throughout the experiments done in this paper. We reported all hyper-parameters and other technical details necessary to reproduce our results. Due to space constraints, we moved some of the technical details to the Appendix section, which is submitted with this manuscript. The results we report support our claims in this paper, and we believe they are reproducible. Any qualitative result we report is an outcome of a machine learning model and does not represent the authors' personal views. The results we discuss on the data we used did not include account information, and all results are anonymous. We anonymized the Twitter, article, and advice (PolitiFact, Snopes, USA Today, The Washington Post) data we collected to respect the privacy policies of the various websites and user data.
While our overall approach does rely on user insights, each advice statement provided does not directly affect the final prediction, so a system receiving advice for fake news detection cannot be easily manipulated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethics Statement", "sec_num": "6" }, { "text": "In this section, we provide implementation details for our models. The dataset we use has 859 sources: 452 high factuality, 245 mixed, and 162 low, and was released publicly by (Baly et al., 2020) 1 . The dataset does not include any other raw data (articles, sources, etc.), so we must scrape our own.", "cite_spans": [ { "start": 177, "end": 196, "text": "(Baly et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "A Supplemental Material", "sec_num": null }, { "text": "For each source, we attempted to scrape news articles using public libraries (Newspaper3K 2 , Scrapy 3 , and news-please 4 (Hamborg et al., 2017)). In cases where the web pages of the source news articles were removed, we used the Wayback Machine 5 . Overall, our sources have an average of 109 articles, with a standard deviation of 36.", "cite_spans": [ { "start": 123, "end": 145, "text": "(Hamborg et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "A.1 Data Collection", "sec_num": null }, { "text": "For Twitter users, we used the Twitter API 6 to scrape 5000 followers for each Twitter account we could find (72.5% of the sources, identical to (Baly et al., 2020)). In the graph, we then connected these users to the sources they follow. In addition, we used the Twitter Search API to search for articles on Twitter and find any Tweets that mention the article title or URL within 3 months of the article being published. We then downloaded the users that made these Tweets as well, and added them to our graph, linking them to the respective articles they talk about.
Finally, to increase the connectivity of the graph and accurately capture the interactions between the users, we also scraped the followers of every Twitter user. We then made sure to only add users to our graph that either interact with multiple sources (through source or article connections) or with another user, so that every node would be interconnected.", "cite_spans": [ { "start": 145, "end": 164, "text": "(Baly et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "A.1 Data Collection", "sec_num": null }, { "text": "We did not scrape YouTube accounts, but rather used the same ones released by (Baly et al., 2020). They found YouTube channels for 49% of sources.", "cite_spans": [ { "start": 90, "end": 109, "text": "(Baly et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "A.1 Data Collection", "sec_num": null }, { "text": "For collecting relevant claim advice from news sources (PolitiFact, Snopes, USA Today, and the Washington Post), we used the Google FactCheck tool 7 , along with scraping the PolitiFact website.
We downloaded 29,673 claims in total.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Data Collection", "sec_num": null }, { "text": "1 https://github.com/ramybaly/News-Media-Reliability 2 https://github.com/codelucas/newspaper 3 https://github.com/scrapy/scrapy 4 https://github.com/fhamborg/news-please 5 https://archive.org/web/ 6 https://developer.twitter.com/en/docs 7 https://toolbox.google.com/factcheck/explorer A.2 Experimental Settings", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Data Collection", "sec_num": null }, { "text": "Our initial Twitter embedding for each source and engaging user was a 773-dimensional vector consisting of the SBERT (Reimers and Gurevych, 2019) (RoBERTa (Liu et al., 2019) Base NLI model) representation of their bio, concatenated with the following numerical features: a binary number representing whether the source is verified, the number of users a source follows and the number that follow it, the number of tweets it makes, and the number of favorites/likes its tweets have received. For YouTube, the embedding we used was the average of the number of views, dislikes, and comments for each video the source posted. Sources that did not have a YouTube channel had a random YouTube embedding. For articles, we used the SBERT (Reimers and Gurevych, 2019) RoBERTa model to generate an embedding for each article.
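As a sketch, the 773-dimensional source/user vector described above is simply the 768-dimensional SBERT bio embedding concatenated with the 5 profile statistics; the dictionary keys below are illustrative, not actual Twitter API field names.

```python
import numpy as np

def profile_feature_vector(bio_embedding, profile):
    # 768-d SBERT embedding of the Twitter bio, concatenated with five
    # numeric profile features, giving the 773-d initial node vector.
    stats = np.array([
        1.0 if profile['verified'] else 0.0,
        profile['following_count'],
        profile['follower_count'],
        profile['tweet_count'],
        profile['favorite_count'],
    ], dtype=np.float32)
    return np.concatenate([bio_embedding, stats])

bio = np.zeros(768, dtype=np.float32)  # stand-in for the SBERT output
vec = profile_feature_vector(bio, {'verified': True, 'following_count': 10,
                                   'follower_count': 250, 'tweet_count': 40,
                                   'favorite_count': 7})
```

In practice the raw counts would likely be normalized before training; the sketch only shows where the 773 dimensions come from (768 + 5).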
For relevant claim advice, we used the same SBERT (Reimers and Gurevych, 2019) RoBERTa model to generate an embedding for each advice claim.", "cite_spans": [ { "start": 114, "end": 126, "text": "(Reimers and", "ref_id": "BIBREF13" }, { "start": 127, "end": 170, "text": "Gurevych, 2019) (RoBERTa (Liu et al., 2019)", "ref_id": null }, { "start": 726, "end": 754, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF13" }, { "start": 863, "end": 891, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "A.2.1 Initial Embeddings", "sec_num": null }, { "text": "We also mentioned special factuality nodes in Sec 2.2.1 that are added to the graph and connected to advice claims, to allow the model to easily represent the advice label (either the claim label or the fact that a Twitter user is spreading bad news). These nodes are initialized randomly with a 768-dimensional embedding that is then learned when the graph is re-trained after the initial set of advice is added.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2.1 Initial Embeddings", "sec_num": null }, { "text": "We downloaded an average of 109 articles per source, with a standard deviation of 36, and user-engagements (talking about articles, following sources/other users) via the Twitter API 8 (sources have an average of 27 users directly connected to them or to their articles). Using this data we construct the graph as described in Sec 2.1, which consists of 69,978 users, 93,191 articles, 164,034 nodes, and 7,196,808 edges.
Details about the model setup we utilized when training our graph (chosen using the development set) and our scraping protocol are in Appendix A.", "cite_spans": [ { "start": 339, "end": 398, "text": "69,978 users, 93,191 articles, 164,034 nodes, and 7,196,808", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A.3 Graph Statistics", "sec_num": null }, { "text": "Our models are built on top of PyTorch (Paszke et al., 2019) and DGL (Deep Graph Library) in Python. The R-GCN we use consists of 5 layers with 128 hidden units, trained with a learning rate of 0.001 and a batch size of 128 for Node Classification. Our initial source, article, and advice embeddings have hidden dimension 768, while the user embeddings have dimension 773.", "cite_spans": [ { "start": 39, "end": 60, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "A.3.1 Model Setup", "sec_num": null }, { "text": "We choose parameters using the development set (20% of train sources) for one of the training data splits, and then apply them uniformly across all the splits when training the final models. Based on the dev set, we choose the stopping point of the best-performing models, on top of which advice is then applied. 
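For intuition, here is a dense NumPy sketch of the relational message passing one R-GCN layer performs (Schlichtkrull et al., 2018). The actual model uses DGL's sparse implementation stacked 5 layers deep; the initialization and normalization details below are simplifications, not the paper's exact configuration:

```python
import numpy as np

def rgcn_layer(H, adj_per_rel, W_rel, W_self):
    """One R-GCN layer in dense form:
    h_i' = ReLU(W_self h_i + sum_r (1/c_{i,r}) sum_{j in N_r(i)} W_r h_j)."""
    out = H @ W_self                                # self-loop transform
    for A, W in zip(adj_per_rel, W_rel):
        deg = A.sum(axis=1, keepdims=True)          # c_{i,r}: per-relation degree
        norm = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
        out += norm @ (H @ W)                       # normalized relational messages
    return np.maximum(out, 0.0)                     # ReLU

rng = np.random.default_rng(0)
n, d, h, n_rels = 6, 8, 128, 3                      # 128 hidden units, as in the paper
H = rng.normal(size=(n, d))
adj = [rng.integers(0, 2, size=(n, n)).astype(float) for _ in range(n_rels)]
W_rel = [rng.normal(size=(d, h)) * 0.1 for _ in range(n_rels)]
W_self = rng.normal(size=(d, h)) * 0.1
H1 = rgcn_layer(H, adj, W_rel, W_self)
assert H1.shape == (6, 128) and (H1 >= 0).all()
```

Each relation type (publishes, follows, tweets about, etc.) gets its own weight matrix, which is what lets the model treat the different social interactions differently.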
In the setups where we did not apply all the advice at once, we determined all the advice that could be relevant and then randomly chose which ones to apply, based on the percentage of the total that the experiment required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3.1 Model Setup", "sec_num": null }, { "text": "Our models were trained on a 12GB TITAN XP GPU card; training each data split for Node Classification takes approximately 4 hours, while Link Prediction Pre-training and the combined initialization step take 24 hours.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3.1 Model Setup", "sec_num": null }, { "text": "To replicate (Baly et al., 2020) (M3), we used their released code with our features. Specifically, we used our article, Twitter profile, Twitter Follower, and YouTube embeddings. This setup uses all the data in our graph, and also provided the best performance in (Baly et al., 2020).", "cite_spans": [ { "start": 13, "end": 31, "text": "(Baly et al., 2020", "ref_id": "BIBREF3" }, { "start": 273, "end": 292, "text": "(Baly et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "A.3.2 Replication of Prior Work", "sec_num": null }, { "text": "https://developer.twitter.com/en/docs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Flair: An easy-to-use framework for state-of-the-art nlp", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Bergmann", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Kashif", "middle": [], "last": "Rasul", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2019, "venue": "NAACL 2019, 2019 
Annual Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", "volume": "", "issue": "", "pages": "54--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. Flair: An easy-to-use framework for state-of-the-art nlp. In NAACL 2019, 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The modern news consumer", "authors": [], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Barthel Elisa Shearer Amy Mitchell, Jeffrey Gottfried. 2016. The modern news consumer. Pew Research Center.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Predicting factuality of reporting and bias of news media sources", "authors": [ { "first": "Ramy", "middle": [], "last": "Baly", "suffix": "" }, { "first": "Georgi", "middle": [], "last": "Karadzhov", "suffix": "" }, { "first": "Dimitar", "middle": [], "last": "Alexandrov", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "18", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramy Baly, Georgi Karadzhov, Dimitar Alexandrov, James Glass, and Preslav Nakov. 2018. Predicting factuality of reporting and bias of news media sources. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '18, Brussels, Belgium.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "What was written vs. 
who read it: News media profiling using text analysis and social media context", "authors": [ { "first": "Ramy", "middle": [], "last": "Baly", "suffix": "" }, { "first": "Georgi", "middle": [], "last": "Karadzhov", "suffix": "" }, { "first": "Jisun", "middle": [], "last": "An", "suffix": "" }, { "first": "Haewoon", "middle": [], "last": "Kwak", "suffix": "" }, { "first": "Yoan", "middle": [], "last": "Dinkov", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Ali", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL '20", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramy Baly, Georgi Karadzhov, Jisun An, Haewoon Kwak, Yoan Dinkov, Ahmed Ali, James Glass, and Preslav Nakov. 2020. What was written vs. who read it: News media profiling using text analysis and social media context. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL '20.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "news-please: A generic news crawler and extractor", "authors": [ { "first": "Felix", "middle": [], "last": "Hamborg", "suffix": "" }, { "first": "Norman", "middle": [], "last": "Meuschke", "suffix": "" }, { "first": "Corinna", "middle": [], "last": "Breitinger", "suffix": "" }, { "first": "Bela", "middle": [], "last": "Gipp", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th International Symposium of Information Science", "volume": "", "issue": "", "pages": "218--223", "other_ids": { "DOI": [ "10.5281/zenodo.4120316" ] }, "num": null, "urls": [], "raw_text": "Felix Hamborg, Norman Meuschke, Corinna Breitinger, and Bela Gipp. 2017. news-please: A generic news crawler and extractor. 
In Proceedings of the 15th International Symposium of Information Science, pages 218-223.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Graph neural networks with continual learning for fake news detection from social media", "authors": [ { "first": "Yi", "middle": [], "last": "Han", "suffix": "" }, { "first": "Shanika", "middle": [], "last": "Karunasekera", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Leckie", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.03316" ] }, "num": null, "urls": [], "raw_text": "Yi Han, Shanika Karunasekera, and Christopher Leckie. 2020. Graph neural networks with continual learning for fake news detection from social media. arXiv preprint arXiv:2007.03316.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Encoding social information with graph convolutional networks for political perspective detection in news media", "authors": [ { "first": "Chang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2594--2604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang Li and Dan Goldwasser. 2019. Encoding social information with graph convolutional networks for political perspective detection in news media. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2594-2604.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improving natural language interaction with robots using advice", "authors": [ { "first": "Nikhil", "middle": [], "last": "Mehta", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1962--1967", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikhil Mehta and Dan Goldwasser. 2019. Improving natural language interaction with robots using advice. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1962-1967.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Fang: Leveraging social context for fake news detection using graph representation", "authors": [ { "first": "Van-Hoang", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Kazunari", "middle": [], "last": "Sugiyama", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 29th ACM International Conference on Information & Knowledge Management", "volume": "", "issue": "", "pages": "1165--1174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Van-Hoang Nguyen, Kazunari Sugiyama, Preslav Nakov, and Min-Yen Kan. 2020. Fang: Leveraging social context for fake news detection using graph representation. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 1165-1174.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Combining fact extraction and verification with neural semantic matching networks", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Haonan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "6859--6866", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neural semantic matching networks. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6859-6866.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A decomposable attention model for natural language inference", "authors": [ { "first": "Ankur", "middle": [ "P" ], "last": "Parikh", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.01933" ] }, "num": null, "urls": [], "raw_text": "Ankur P Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": 
"DeVito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.10084" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. 
arXiv preprint arXiv:1908.10084.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Media bias monitor: Quantifying biases of social media news outlets at large-scale", "authors": [ { "first": "Filipe", "middle": [ "N" ], "last": "Ribeiro", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Henrique", "suffix": "" }, { "first": "Fabricio", "middle": [], "last": "Benevenuto", "suffix": "" }, { "first": "Abhijnan", "middle": [], "last": "Chakraborty", "suffix": "" }, { "first": "Juhi", "middle": [], "last": "Kulshrestha", "suffix": "" }, { "first": "Mahmoudreza", "middle": [], "last": "Babaei", "suffix": "" }, { "first": "Krishna", "middle": [ "P" ], "last": "Gummadi", "suffix": "" } ], "year": 2018, "venue": "Twelfth International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filipe N Ribeiro, Lucas Henrique, Fabricio Benevenuto, Abhijnan Chakraborty, Juhi Kulshrestha, Mahmoudreza Babaei, and Krishna P Gummadi. 2018. Media bias monitor: Quantifying biases of social media news outlets at large-scale. 
In Twelfth International AAAI Conference on Web and Social Media.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Modeling relational data with graph convolutional networks", "authors": [ { "first": "Michael", "middle": [], "last": "Schlichtkrull", "suffix": "" }, { "first": "Thomas", "middle": [ "N" ], "last": "Kipf", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Bloem", "suffix": "" }, { "first": "Rianne", "middle": [], "last": "Van Den Berg", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Max", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2018, "venue": "European semantic web conference", "volume": "", "issue": "", "pages": "593--607", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European semantic web conference, pages 593-607. Springer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Overview of checkthat! 
2020 english: Automatic identification and verification of claims in social media", "authors": [ { "first": "Shaden", "middle": [], "last": "Shaar", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Nikolov", "suffix": "" }, { "first": "Nikolay", "middle": [], "last": "Babulkov", "suffix": "" }, { "first": "Firoj", "middle": [], "last": "Alam", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Barr\u00f3n-Cedeno", "suffix": "" }, { "first": "Tamer", "middle": [], "last": "Elsayed", "suffix": "" }, { "first": "Maram", "middle": [], "last": "Hasanain", "suffix": "" }, { "first": "Reem", "middle": [], "last": "Suwaileh", "suffix": "" }, { "first": "Fatima", "middle": [], "last": "Haouari", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Da San Martino", "suffix": "" } ], "year": 2020, "venue": "", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaden Shaar, Alex Nikolov, Nikolay Babulkov, Firoj Alam, Alberto Barr\u00f3n-Cedeno, Tamer Elsayed, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Giovanni Da San Martino, et al. 2020. Overview of checkthat! 2020 english: Automatic identification and verification of claims in social media. Cappellato et al.[10].", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Beyond news contents: The role of social context for fake news detection", "authors": [ { "first": "Kai", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Suhang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the twelfth ACM international conference on web search and data mining", "volume": "", "issue": "", "pages": "312--320", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Shu, Suhang Wang, and Huan Liu. 2019. Beyond news contents: The role of social context for fake news detection. 
In Proceedings of the twelfth ACM international conference on web search and data mining, pages 312-320.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Deep graph library: A graph-centric, highly-performant package for graph neural networks", "authors": [ { "first": "Minjie", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Da", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Zihao", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Quan", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Mufei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Song", "suffix": "" }, { "first": "Jinjing", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Lingfan", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Gai", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.01315" ] }, "num": null, "urls": [], "raw_text": "Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, et al. 2019. Deep graph library: A graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Information Graph capturing interactions between news sources, articles, and engaging users.", "num": null, "type_str": "figure", "uris": null }, "TABREF1": { "num": null, "content": "