{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:14:06.782062Z" }, "title": "ReINTEL Challenge 2020: Vietnamese Fake News Detection using Ensemble Model with PhoBERT embeddings", "authors": [ { "first": "Cao", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "VNG Corporation", "location": {} }, "email": "" }, { "first": "Nguyen", "middle": [], "last": "Hieu", "suffix": "", "affiliation": { "laboratory": "", "institution": "VNG Corporation", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "Thuan", "suffix": "", "affiliation": { "laboratory": "", "institution": "VNG Corporation", "location": {} }, "email": "" }, { "first": "Vo", "middle": [], "last": "Quoc", "suffix": "", "affiliation": { "laboratory": "", "institution": "VNG Corporation", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Along with the increasing traffic of social networks in Vietnam in recent years, the number of unreliable news has also grown rapidly. As we make decisions based on the information we come across daily, fake news, depending on the severity of the matter, can lead to disastrous consequences. This paper presents our approach for the Fake News Detection on Social Network Sites (SNSs), using an ensemble method with linguistic features extracted using PhoBERT (Nguyen and Nguyen, 2020). Our method achieves AUC score of 0.9521 and got 1 st place on the private test at the 7 th International Workshop on Vietnamese Language and Speech Processing (VLSP). For reproducing the result, the code can be found at https://gitlab.com/thuan.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Along with the increasing traffic of social networks in Vietnam in recent years, the number of unreliable news has also grown rapidly. As we make decisions based on the information we come across daily, fake news, depending on the severity of the matter, can lead to disastrous consequences. This paper presents our approach for the Fake News Detection on Social Network Sites (SNSs), using an ensemble method with linguistic features extracted using PhoBERT (Nguyen and Nguyen, 2020). Our method achieves AUC score of 0.9521 and got 1 st place on the private test at the 7 th International Workshop on Vietnamese Language and Speech Processing (VLSP). For reproducing the result, the code can be found at https://gitlab.com/thuan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Social network sites have become a very influential part of Vietnamese people's daily life. We use them to connect with each other, and get access to the latest information. However, such advances in large scale communication also bring their problems, one of which is fake news. It can be seen as information which is altered, manipulated, misguiding users to achieve personal gains, such as increase advertisement interaction, political power gain, or even terrorism. Without proper censoring, they can spread fear in the public community, causing panic and invoking violence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Due to such dire consequences, a lot of researches have been done to prevent this type of harmful information. However, there has been little effort put in for the Vietnamese language. 
This is a challenging task, due to a lack of quality human-verified data and the difficult nature of fake content. Fake news may have:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Content similar to real news, but with some key information twisted (figures, celebrities, locations, ...) in order to capture the attention of readers.", "cite_spans": [ { "start": 77, "end": 115, "text": "(figures, celebrities, locations, ...)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Content encapsulated inside images, which requires human verification", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Special slang, acronyms, and misspellings, which make it difficult for machines to automate the process", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Unseen information that can take time to verify, by which point it might be too late", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present our approach to the fake news detection problem posed at VLSP 2020 in the shared task Reliable Intelligence Identification on Vietnamese SNSs (ReINTEL) (Le et al., 2020). We experimented with 3 types of features: the time the news was posted, the community interaction with it (through the numbers of shares, likes, and comments), and, most importantly, the content of the news. After extensive preprocessing and exploration, we combined the strength of basic handcrafted linguistic cues in the training data with term-frequency encoding (TF-IDF) and PhoBERT context embeddings. These features were combined and used as input to an ensemble model built with StackNet 1 . Our model achieved an AUC score of 0.9521, ranking first on the private leaderboard of ReINTEL.", "cite_spans": [ { "start": 183, "end": 200, "text": "(Le et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We discuss related work and previous approaches in section 2. We then describe our method's workflow in section 3, starting with data cleaning and preprocessing, followed by how we extracted the features we used and the ensemble of models that produced our final result. Experimental results and a detailed description of parameters are shown in section 4. We conclude our report and discuss what could be improved in section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For linguistic-based features, some approaches focus on extracting special discriminative features such as acronyms, pronouns, and special characters (Shu et al., 2017; Gupta et al., 2014). However, these features are not well understood, require extensive labour for validation, and can be domain-specific. Ruchansky et al. extend this line of work by using doc2vec embeddings, which learn semantic representations of the posts. Recent advancements in Natural Language Processing, most importantly BERT (Devlin et al., 2018), have helped to advance the research on this topic. Bhatt et al. combine contextual features generated by LSTM and CNN models with statistical handcrafted features to perform the final prediction. The work by Yang et al.
uses a combination of multiple Recurrent Neural Network (RNN) architectures as a natural language inference (NLI) mechanism, combined with BERT to make the final prediction. Research done by Huang and Chen focuses more on ensembling multiple deep learning architectures to achieve state-of-the-art results for fake news detection. Ahmad et al. also show that ensemble methods help achieve better performance on this task.", "cite_spans": [ { "start": 147, "end": 165, "text": "(Shu et al., 2017;", "ref_id": "BIBREF8" }, { "start": 166, "end": 185, "text": "Gupta et al., 2014)", "ref_id": "BIBREF3" }, { "start": 507, "end": 528, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related works", "sec_num": "2" }, { "text": "In this section, we describe our approach to solving the problem. Linguistic features extracted with PhoBERT and TF-IDF, in conjunction with the provided metadata, are used as input to an ensemble of models to achieve the best result on the private dataset. Using models that don't require much computation power not only helps us tune each model quickly, but also enables us to analyze the impact of each feature on the fake news detection problem as a whole.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "To extract valuable features, we started with some preprocessing steps, which are described as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "1. Convert numeric-like features to numeric type if possible, null value otherwise;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "2. Remove rows having null or empty content;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "3. Deduplicate rows having the same content and interactions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "The first step was applied to both the training and test sets, while the remaining ones were done only on the training set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "We consider all features except the content of the posts to be metadata features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metadata", "sec_num": "3.2.1" }, { "text": "Number of likes, comments, and shares: We first transformed these 3 features to log scale for normalization. Then, for each of them, an is_null feature was generated, equal to 0 if the corresponding value is present, and 1 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metadata", "sec_num": "3.2.1" }, { "text": "We extracted the hour and the day of week from the timestamp of posts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Timestamp of posts:", "sec_num": null }, { "text": "Combinations: We tried to generate some combinations of the above numeric features. Particularly, we computed the pairwise ratios of the numbers of likes, comments, and shares, obtaining 3 new numeric features.
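A minimal sketch of this metadata feature engineering, assuming a pandas DataFrame; the column names num_like, num_comment, num_share, and timestamp_post (unix seconds) are hypothetical stand-ins for the actual ReINTEL field names:

```python
import numpy as np
import pandas as pd

def build_metadata_features(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the metadata features: log scales, is_null flags,
    hour/day-of-week, and the 3 pairwise interaction ratios."""
    out = pd.DataFrame(index=df.index)
    for col in ["num_like", "num_comment", "num_share"]:
        out[f"{col}_is_null"] = df[col].isna().astype(int)  # 1 if value is missing
        out[f"{col}_log"] = np.log1p(df[col])               # log-scale normalization
    # Hour and day of week from the post timestamp (unix seconds assumed).
    ts = pd.to_datetime(df["timestamp_post"], unit="s", errors="coerce")
    out["hour"] = ts.dt.hour
    out["day_of_week"] = ts.dt.dayofweek
    # Pairwise ratios between likes, comments, and shares (3 combinations).
    for a, b in [("num_like", "num_comment"), ("num_like", "num_share"),
                 ("num_comment", "num_share")]:
        out[f"{a}_over_{b}"] = df[a] / df[b]
    # Any remaining not-a-number value (incl. division by zero) is filled with -1.
    return out.replace([np.inf, -np.inf], np.nan).fillna(-1)
```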
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Timestamp of posts:", "sec_num": null }, { "text": "Finally, any not-a-number value was filled with -1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Timestamp of posts:", "sec_num": null }, { "text": "Term Frequency - Inverse Document Frequency (TF-IDF): TF-IDF is a simple but strong feature extraction technique for text data. We fitted a TF-IDF vectorizer with 1-grams to 3-grams on the post contents of our training data, followed by a Singular Value Decomposition (SVD) model to reduce the dimension of the transformed TF-IDF features. A 300-dimensional vector of latent features was obtained for each post at the end of this step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post content", "sec_num": "3.2.2" }, { "text": "PhoBERT Embedding: BERT (Devlin et al., 2018) is a robust language model that has recently boosted many NLP tasks to a new level of achievement.", "cite_spans": [ { "start": 24, "end": 45, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Post content", "sec_num": "3.2.2" }, { "text": "PhoBERT (Nguyen and Nguyen, 2020) is, to our knowledge, the best pre-trained BERT model for Vietnamese. In our solution, we leveraged PhoBERT to extract document embeddings from the posts. Notably, to obtain more meaningful contextual embeddings, some cleaning operations were applied to the contents before feeding them into PhoBERT: word tokenization, special character removal, and redundant content removal. Moreover, another SVD model was fitted on top of those embeddings to map the 768-dimensional output vectors of the BERT model to a 100-dimensional space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post content", "sec_num": "3.2.2" }, { "text": "Character Counting: After extensive exploratory analysis, it turned out that the occurrences of some special characters and patterns, such as question marks, exclamation marks, triple dots, and links, have an impact on the performance of our model. Thus, we created a list of those characters and corresponding features that count the occurrences of each of them in the posts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post content", "sec_num": "3.2.2" }, { "text": "Tree-based models are the first choice when dealing with tabular data, thanks to their strength in both predictive power and explainability. Furthermore, ensemble learning, especially stacking, is a good way to prevent overfitting and improve the performance of the overall system. Following these observations, we designed our modeling phase as an ensemble system consisting of 25 different base models and 5 stacked models on top of them. Precisely, the base models are of 5 different kinds: 5 Random Forests, 5 LightGBM Gradient Boosted Decision Trees (GBDTs), 5 CatBoost GBDTs, 5 shallow Neural Networks, and 5 Naive Bayes classifiers; and the stacked models are 5 CatBoost GBDTs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modelling", "sec_num": "3.3" }, { "text": "Training phase: we formulated our training data in a 5-fold cross-validation manner. In each fold, 5 models of different kinds were trained. After training finished, their 5 predicted probability vectors were treated as 5 new features and combined with the original features to form a new training set, which was used to train the corresponding stacked model of that fold.
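Below is a condensed sketch of one plausible reading of this per-fold stacking scheme (the actual submission used StackNet; only three of the five base-model kinds are shown, and all model settings and the numpy-array inputs are illustrative assumptions):

```python
import numpy as np
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

def fit_stacked_ensemble(X: np.ndarray, y: np.ndarray, n_splits: int = 5):
    """Train per-fold base models and one stacked CatBoost model per fold."""
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    ensemble = []
    for train_idx, valid_idx in folds.split(X, y):
        # Base models of different kinds, trained on the fold's training part.
        base_models = [RandomForestClassifier(n_estimators=300),
                       LGBMClassifier(),
                       CatBoostClassifier(verbose=0)]
        for m in base_models:
            m.fit(X[train_idx], y[train_idx])
        # Their predicted probabilities become extra features alongside the originals.
        meta = np.column_stack([m.predict_proba(X[valid_idx])[:, 1]
                                for m in base_models])
        stacked = CatBoostClassifier(verbose=0)
        stacked.fit(np.hstack([X[valid_idx], meta]), y[valid_idx])
        ensemble.append((base_models, stacked))
    return ensemble

def predict_stacked(ensemble, X_test: np.ndarray) -> np.ndarray:
    """Inference: average the probabilities of the per-fold stacked models."""
    scores = []
    for base_models, stacked in ensemble:
        meta = np.column_stack([m.predict_proba(X_test)[:, 1]
                                for m in base_models])
        scores.append(stacked.predict_proba(np.hstack([X_test, meta]))[:, 1])
    return np.mean(scores, axis=0)
```

predict_stacked anticipates the inference step described next, where the probabilities from the 5 per-fold stacked models are averaged.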
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modelling", "sec_num": "3.3" }, { "text": "Inference phase: the probabilities from the 5 trained stacked models are averaged to get the final scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modelling", "sec_num": "3.3" }, { "text": "We evaluated our methods on the datasets provided by the 2020 VLSP competition, which contain in total about 6000 training and 2000 testing examples, divided into multiple sets described in table 1. The manually annotated labels equal 1 if the news is potentially unreliable, and 0 otherwise. Our training set is composed of the public training and the warm-up training sets. Table 2 is a statistical summary of our training set. After the feature engineering steps, our final training set consisted of 420 features and 4956 examples, 831 (16.8%) of which have label 1.", "cite_spans": [], "ref_spans": [ { "start": 376, "end": 383, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "It should be noted that, although only the 2 training sets contain labels, we still leveraged the content of posts from all datasets except the private one to extract the features described in section 3.2.2. Making full use of unlabeled data in this way helps the model generalize well and results in better performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "All steps were executed on the same machine with the following specs: 4 Intel Xeon CPUs @ 2.20GHz, 16GB of RAM, and 1 Tesla T4 16GB GPU. The step that occupied the largest amount of RAM (~10GB) was fitting the SVD on the vectorized TF-IDF features. Only the training step of the ensemble model used all CPU cores; the others used one core at a time. The GPU was only used for extracting document embeddings from the PhoBERT model. Table 4 summarizes the approximate time of some time-consuming steps of the proposed method on our training set. We use the Area Under the Curve (AUC) score as our evaluation metric and a 5-fold cross-validation scheme to evaluate our models. Though many experiments were run, we only show the main versions that improved the performance significantly. All versions before the ensemble were trained with a tuned CatBoost classifier. Comparisons to the top teams in the competition are shown in table 5.
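As a minimal sketch of this evaluation scheme, assuming a feature matrix X and label vector y as numpy arrays (the CatBoost settings are illustrative, not the tuned ones used in the paper):

```python
import numpy as np
from catboost import CatBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def cv_auc(X: np.ndarray, y: np.ndarray, n_splits: int = 5) -> float:
    """Average out-of-fold AUC over a 5-fold cross-validation."""
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    aucs = []
    for train_idx, valid_idx in folds.split(X, y):
        model = CatBoostClassifier(verbose=0)  # stand-in for the tuned classifier
        model.fit(X[train_idx], y[train_idx])
        proba = model.predict_proba(X[valid_idx])[:, 1]
        aucs.append(roc_auc_score(y[valid_idx], proba))
    return float(np.mean(aucs))
```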
Our experiments were conducted as follows:", "cite_spans": [], "ref_spans": [ { "start": 412, "end": 419, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "\u2022 Version 1: no embedding, no combination features (described in section 3.2.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "\u2022 Version 2: add PhoBERT embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "\u2022 Version 3: add the ensemble learning scheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "\u2022 Version 4: add combination features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "\u2022 Final version: leverage unlabeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.3" }, { "text": "We list some notable insights that we discovered in this task:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion 5.1 Summary", "sec_num": "5" }, { "text": "\u2022 Combining high-importance features is a good way of generating new features", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion 5.1 Summary", "sec_num": "5" }, { "text": "\u2022 TF-IDF should be applied to raw contents to capture their original form, while document embeddings should be computed on cleaned ones to obtain contextual features. Table 5: AUC scores of the proposed method and other teams on different datasets.", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 172, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Conclusion 5.1 Summary", "sec_num": "5" }, { "text": "\u2022 The more content the model learned, the better the performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion 5.1 Summary", "sec_num": "5" }, { "text": "\u2022 Stacking with complementary bagging is very powerful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion 5.1 Summary", "sec_num": "5" }, { "text": "Due to time limits, many methods we tried still need more validation and tuning, and were therefore left out of the final submission. Other information, such as post images, could also give a boost in performance, since content can be embedded in the images along with special information such as watermarks. Other Natural Language Processing features, like the sentiment of the comments, part-of-speech tagging, and bias, could also be helpful; although we tried them, we have not tuned them carefully enough to produce good results.
We also believe that the URL, if provided, could help improve performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "5.2" }, { "text": "A framework using stacked generalization to combine the results of different models: https://github.com/kaz-Anova/StackNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fake news detection using machine learning ensemble methods", "authors": [ { "first": "Iftikhar", "middle": [], "last": "Ahmad", "suffix": "" }, { "first": "Muhammad", "middle": [], "last": "Yousaf", "suffix": "" }, { "first": "Suhail", "middle": [], "last": "Yousaf", "suffix": "" }, { "first": "Muhammad Ovais", "middle": [], "last": "Ahmad", "suffix": "" } ], "year": 2020, "venue": "Complexity", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iftikhar Ahmad, Muhammad Yousaf, Suhail Yousaf, and Muhammad Ovais Ahmad. 2020. Fake news detection using machine learning ensemble methods. Complexity, 2020.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "On the benefit of combining neural, statistical and external features for fake news identification", "authors": [ { "first": "Gaurav", "middle": [], "last": "Bhatt", "suffix": "" }, { "first": "Aman", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Shivam", "middle": [], "last": "Sharma", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1712.03935" ] }, "num": null, "urls": [], "raw_text": "Gaurav Bhatt, Aman Sharma, Shivam Sharma, Ankush Nagpal, Balasubramanian Raman, and Ankush Mittal. 2017. On the benefit of combining neural, statistical and external features for fake news identification. arXiv preprint arXiv:1712.03935.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Tweetcred: Realtime credibility assessment of content on twitter", "authors": [ { "first": "Aditi", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Ponnurangam", "middle": [], "last": "Kumaraguru", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Castillo", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Meier", "suffix": "" } ], "year": 2014, "venue": "International Conference on Social Informatics", "volume": "", "issue": "", "pages": "228--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditi Gupta, Ponnurangam Kumaraguru, Carlos Castillo, and Patrick Meier. 2014. Tweetcred: Real-time credibility assessment of content on twitter. In International Conference on Social Informatics, pages 228-243.
Springer.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Fake news detection using an ensemble learning model based on self-adaptive harmony search algorithms", "authors": [ { "first": "Yin-Fu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Po-Hong", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2020, "venue": "Expert Systems with Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yin-Fu Huang and Po-Hong Chen. 2020. Fake news detection using an ensemble learning model based on self-adaptive harmony search algorithms. Expert Systems with Applications, page 113584.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Reintel: A multimodal data challenge for responsible information identification on social network sites", "authors": [ { "first": "Duc-Trong", "middle": [], "last": "Le", "suffix": "" }, { "first": "Xuan-Son", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Nhu-Dung", "middle": [], "last": "To", "suffix": "" }, { "first": "Huu-Quang", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Thuy-Trinh", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Linh", "middle": [], "last": "Le", "suffix": "" }, { "first": "Anh-Tuan", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Minh-Duc", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Nghia", "middle": [], "last": "Le", "suffix": "" }, { "first": "Huyen", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Hoang", "middle": [ "D" ], "last": "Nguyen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duc-Trong Le, Xuan-Son Vu, Nhu-Dung To, Huu-Quang Nguyen, Thuy-Trinh Nguyen, Linh Le, Anh-Tuan Nguyen, Minh-Duc Hoang, Nghia Le, Huyen Nguyen, and Hoang D. Nguyen. 2020. Reintel: A multimodal data challenge for responsible information identification on social network sites.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "PhoBERT: Pre-trained language models for Vietnamese", "authors": [ { "first": "Dat", "middle": [ "Quoc" ], "last": "Nguyen", "suffix": "" }, { "first": "Anh", "middle": [ "Tuan" ], "last": "Nguyen", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "1037--1042", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Vietnamese. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1037-1042.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Csi: A hybrid deep model for fake news detection", "authors": [ { "first": "Natali", "middle": [], "last": "Ruchansky", "suffix": "" }, { "first": "Sungyong", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "797--806", "other_ids": {}, "num": null, "urls": [], "raw_text": "Natali Ruchansky, Sungyong Seo, and Yan Liu. 2017. Csi: A hybrid deep model for fake news detection.
In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 797-806.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Fake news detection on social media: A data mining perspective", "authors": [ { "first": "Kai", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Sliva", "suffix": "" }, { "first": "Suhang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jiliang", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "ACM SIGKDD explorations newsletter", "volume": "19", "issue": "1", "pages": "22--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social media: A data mining perspective. ACM SIGKDD explorations newsletter, 19(1):22-36.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Fake news detection as natural language inference", "authors": [ { "first": "Kai-Chou", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Niven", "suffix": "" }, { "first": "Hung-Yu", "middle": [], "last": "Kao", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.07347" ] }, "num": null, "urls": [], "raw_text": "Kai-Chou Yang, Timothy Niven, and Hung-Yu Kao. 2019. Fake news detection as natural language inference. arXiv preprint arXiv:1907.07347.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "type_str": "table", "text": "Datasets.", "content": "
# rows: 5172
# label 1: 934
# user_name: 3706
# unique post_message: 4868
earliest timestamp_post: Jan 2, 2014
latest timestamp_post: Sep 28, 2020
", "num": null }, "TABREF2": { "html": null, "type_str": "table", "text": "Statistic summarization of our training set.", "content": "", "num": null }, "TABREF4": { "html": null, "type_str": "table", "text": "Model hyper-parameters.", "content": "
", "num": null }, "TABREF5": { "html": null, "type_str": "table", "text": "", "content": "
Table 3 shows the tuned hyper-parameters we used for each model described in Section 3.3. All classifiers except Naive Bayes used our predefined class weights of 0.15 for class 0 and 0.75 for class 1.
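For illustration, a minimal sketch of how such class weights could be passed, here using CatBoost; every setting other than the weights themselves is an assumed placeholder rather than the paper's tuned configuration:

```python
from catboost import CatBoostClassifier

# Class weights as described above: 0.15 for class 0, 0.75 for class 1,
# up-weighting the minority "unreliable" class relative to its frequency.
# iterations and learning_rate are illustrative, not the tuned values.
model = CatBoostClassifier(
    class_weights=[0.15, 0.75],
    iterations=1000,
    learning_rate=0.05,
    verbose=0,
)
```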
", "num": null }, "TABREF6": { "html": null, "type_str": "table", "text": "Approx. run time of proposed method.", "content": "", "num": null } } } }