{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:14:07.248436Z" }, "title": "ReINTEL Challenge 2020: A Multimodal Ensemble Model for Detecting Unreliable Information on Vietnamese SNS", "authors": [ { "first": "Nguyen", "middle": [], "last": "Manh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toyo University", "location": { "country": "Japan" } }, "email": "" }, { "first": "Duc", "middle": [], "last": "Tuan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toyo University", "location": { "country": "Japan" } }, "email": "ductuan024@gmail.com" }, { "first": "Pham", "middle": [], "last": "Quang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Aimesoft JSC", "location": { "country": "Vietnam" } }, "email": "" }, { "first": "Nhat", "middle": [], "last": "Minh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Aimesoft JSC", "location": { "country": "Vietnam" } }, "email": "minhpham@aimesoft.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present our methods for the unreliable information identification task at ReINTEL Challenge 2020. The task is to classify a piece of information into the reliable or unreliable category. We propose a novel multimodal ensemble model which combines two multimodal models to solve the task. In each multimodal model, we combine feature representations acquired from three different data types: texts, images, and metadata. Multimodal features are derived from three neural networks and fused for classification. Experimental results showed that our proposed ensemble model improved over single models in terms of AUC score. We obtained a 0.9445 AUC score on the private test of the challenge.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present our methods for the unreliable information identification task at ReINTEL Challenge 2020. 
The task is to classify a piece of information into the reliable or unreliable category. We propose a novel multimodal ensemble model which combines two multimodal models to solve the task. In each multimodal model, we combine feature representations acquired from three different data types: texts, images, and metadata. Multimodal features are derived from three neural networks and fused for classification. Experimental results showed that our proposed ensemble model improved over single models in terms of AUC score. We obtained a 0.9445 AUC score on the private test of the challenge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recently, fake news detection has received much attention in both the NLP and data mining research communities. This year, for the first time, the VLSP 2020 Evaluation Campaign held the ReINTEL Challenge (Le et al., 2020) to encourage the development of algorithms and systems for detecting unreliable information on Vietnamese SNS. In ReINTEL Challenge 2020, we need to determine whether a piece of information containing texts, images, and metadata is reliable or unreliable. The task is formalized as a binary classification problem, and training data with annotated labels was provided by the VLSP 2020 organizers.", "cite_spans": [ { "start": 190, "end": 207, "text": "(Le et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a novel multimodal ensemble model for identifying unreliable information on Vietnamese SNS. We use neural networks to obtain feature representations from different data types. Multimodal features are fused and put into a sigmoid layer for classification. 
Specifically, we use a BERT model to obtain feature representations from texts, a multi-layer perceptron to encode metadata and text-based features, and a fine-tuned VGG-19 network to obtain feature representations from images. We combined two single models in order to improve the accuracy of fake news detection. Our proposed model obtained a 0.9445 ROC-AUC score on the private test of the challenge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Approaches to fake news detection can be roughly categorized into three groups: content-based methods, user-based methods, and propagation-based methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In content-based methods, content-based features are extracted from textual aspects, such as the contents of posts or comments, and from visual aspects. Textual features can be automatically extracted by a deep neural network such as a CNN (Kaliyar et al., 2020; Tian et al., 2020). We can manually design textual features from word clues, patterns, or other linguistic features of texts, such as their writing styles (Ghosh and Shah, 2018; Yang et al., 2018). We can also analyze unreliable news based on sentiment analysis. 
Furthermore, both textual and visual information can be used together to determine fake news by creating a multimodal model (Zhou et al., 2020; Khattar et al., 2019; Yang et al., 2018).", "cite_spans": [ { "start": 243, "end": 269, "text": "CNN (Kaliyar et al., 2020;", "ref_id": null }, { "start": 270, "end": 288, "text": "Tian et al., 2020)", "ref_id": "BIBREF14" }, { "start": 425, "end": 447, "text": "(Ghosh and Shah, 2018;", "ref_id": "BIBREF5" }, { "start": 448, "end": 466, "text": "Yang et al., 2018)", "ref_id": "BIBREF17" }, { "start": 663, "end": 682, "text": "(Zhou et al., 2020;", "ref_id": "BIBREF18" }, { "start": 683, "end": 704, "text": "Khattar et al., 2019;", "ref_id": "BIBREF7" }, { "start": 705, "end": 723, "text": "Yang et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We can detect fake news by analysing social network information, including user-based features and network-based features. User-based features are extracted from user profiles (Shu et al., 2019; Krishnan and Chen, 2018; Duan et al., 2020). For example, the number of followers, the number of friends, and the registration age are useful features to determine the credibility of a user post (Castillo et al., 2011). Network-based features can be extracted from the propagation of posts or tweets on graphs.", "cite_spans": [ { "start": 175, "end": 193, "text": "(Shu et al., 2019;", "ref_id": "BIBREF13" }, { "start": 194, "end": 218, "text": "Krishnan and Chen, 2018;", "ref_id": "BIBREF9" }, { "start": 219, "end": 237, "text": "Duan et al., 2020)", "ref_id": "BIBREF4" }, { "start": 379, "end": 402, "text": "(Castillo et al., 2011)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we describe the methods we tried to generate results on the private test dataset of the challenge. 
We tried three models in total and finally selected the two best models for ensemble learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the pre-processing stage, we performed the following steps before feeding the data into our models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "\u2022 We found that some emojis are written in text format, such as \":)\", \";)\", \"=]]\", \":(\", \"=[\", etc. We converted those emojis into the Vietnamese sentiment words for \"happy\" and \"sad\", respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "\u2022 We shortened words and tokens that had been lengthened. For example, \"Coooool\" becomes \"Cool\" and \"*****\" becomes \"**\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "\u2022 Since many posts are related to COVID-19 information, we normalized the different terms for COVID-19, such as \"covid\", \"ncov\" and \"convid\", into the single term \"covid\" for consistency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "Since the metadata of the news contains many missing values, we performed imputation on the four original metadata features. We used mean values to fill the missing values for three features: the number of likes, the number of shares, and the number of comments. For the timestamp feature, we applied the MICE imputation method (Azur et al., 2011).", "cite_spans": [ { "start": 331, "end": 350, "text": "(Azur et al., 2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "We found that there are some words written in incorrect forms, such as 's.\u00e1tha . i' instead of 's\u00e1t ha . i'. 
One may try to convert those words into standard forms, but as we will discuss in Section 4, keeping the incorrectly formed words actually improved the accuracy of our models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "We converted the timestamp feature into 5 new features: day, month, year, hour, and weekday. In addition to the metadata features provided in the data, we extracted some statistical information from the texts: the numbers of hashtags, URLs, characters, words, question marks, and exclamation marks. For each user, we counted the number of unreliable news items and the number of reliable news items that the user has made, and the ratio between the two numbers, to indicate the sharing behavior (Shu et al., 2019). We also created a Boolean variable to indicate whether a post contains images. In total, we got 17 features including the metadata features. All the metadata-based features were standardized by subtracting the mean and scaling to unit variance, except for the Boolean feature. Figure 1 shows the general model architecture of the three models we tried. In all models, we applied the same strategy for image-based features and metadata-based features. For metadata-based features, we passed them into a fully-connected layer with batch normalization. Some posts have one or more images, while others have no image. For posts containing images, we randomly chose one image as the input. For the other posts, we created a black image (all pixels have zero values) as the input. We then fine-tuned the VGG-19 model on the images of the training data. After that, we used the output prior to the fully-connected layer as image-based features. 
Instead of taking averages of all pixel vectors, we applied the attention mechanism shown in Figure 1b to obtain the final representation of the images.", "cite_spans": [ { "start": 507, "end": 525, "text": "(Shu et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 808, "end": 816, "text": "Figure 1", "ref_id": null }, { "start": 1596, "end": 1605, "text": "Figure 1b", "ref_id": null } ], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "In the following sections, we describe the three variants that we made from the general architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.2" }, { "text": "Model 1 In the first model (Figure 2a), we obtained the embedding vector of a text using the BERT model (Devlin et al., 2019). After that, we used a 1D-CNN (Kim, 2014) with filter sizes 2, 3, 4, and 5. In this way, we can use more information from different sets of word vectors for prediction. We flattened and concatenated all the outputs from the 1D-CNN and passed them into a fully-connected layer with a batch normalization layer. Finally, we took averages of the features of texts, images, and metadata and passed them into a sigmoid layer for classification.", "cite_spans": [ { "start": 101, "end": 122, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 152, "end": 163, "text": "(Kim, 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 27, "end": 37, "text": "(Figure 2a", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.2" }, { "text": "Model 2 In the second model (Figure 2b), there are some changes in comparison with the first model. 
After passing the embedding vectors through various layers of 1D-CNN, we stacked those outputs vertically and passed them into three additional 1D-CNN layers.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 38, "text": "(Figure 2b", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.2" }, { "text": "Model 3 In the third model (Figure 2b), we slightly changed the second model by adding a shortcut connection between the input and the output of each 1D-CNN layer. ", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 37, "text": "(Figure 2b", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.2" }, { "text": "For the final model, we selected the two best of the three models above and took the average of the probabilities returned by the two models to obtain the final result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ensemble Model", "sec_num": null }, { "text": "In the experiments, we used the same parameters, as shown in Table 1, for all proposed models 1 . We reported ROC-AUC scores on the private test data.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "In the first experiment, we compared two ways of preprocessing texts: 1) converting words in incorrect forms into corrected forms; and 2) keeping the incorrect forms of words. (Our code: https://github.com/dt024/vlsp2020_toyoaimesoft)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "The text is put through PhoBERT (Nguyen and Nguyen, 2020) to get the embedded vectors. In this experiment, we did not apply the attention mechanism. Table 2 shows that keeping the original words obtained a better ROC-AUC score.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 230, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "Next, we compared the effects of two different pre-trained BERT models for Vietnamese: PhoBERT and Bert4news 2 . Table 3 shows that the Bert4news model is significantly better than the PhoBERT model. Furthermore, when we added the proposed attention mechanism to get feature representations for images, we obtained a 0.940217 AUC score. Table 4 shows the results for the three models described in Section 3. We got 0.939215 with model 1, 0.919242 with model 2, and 0.940217 with model 3. The final model is derived from model 1 and model 3 by calculating the average of the results returned by the two models. We obtained a ROC-AUC of 0.944949 using that simple ensemble model.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 327, "end": 334, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "Since there may be more than one image in a post, we tried using either one image or multiple images (4 images at most) as input. In preliminary experiments, we found that using only one image for each post obtained a higher result on the development set, so we decided to use one image in further experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "We have shown that keeping words in incorrect forms in the text works better than fixing them to the correct forms. A possible explanation might be that those texts may contain violent content or extreme words, and users use those forms in order to bypass the social media sites' filtering function. 
Since those words can partly reflect the sentiment of the text, the classifier may benefit from them. The reason is that unreliable content tends to use more subjective or extreme words to convey a particular perspective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "We also showed that using the proposed attention mechanism improved the result significantly. This result indicates that images and texts are correlated. In our observation, the images and texts of reliable news are often related, while in many unreliable news posts, posters use images that do not relate to the content of the news for click-bait purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "We found that convolution layers are useful and that textual features can be well extracted by CNN layers. Conneau et al., 2017 showed that a deep stack of local operations can help the model learn a high-level hierarchical representation of a sentence and that increasing the depth leads to improvements in performance. Also, a deeper CNN with residual connections can help to avoid overfitting and to mitigate the vanishing gradient problem (Kaliyar et al., 2020).", "cite_spans": [ { "start": 102, "end": 122, "text": "Conneau et al., 2017", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "We have presented a multimodal ensemble model for unreliable information identification on Vietnamese SNS. We combined two neural network models which fuse multimodal features from three data types: texts, images, and metadata. Experimental results confirmed the effectiveness of our methods on the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion 6.1 Summary", "sec_num": "6" }, { "text": "As future work, we plan to use auxiliary data to verify whether a piece of information is unreliable. 
We believe that the natural way to make a judgement in the fake news detection task is to compare a piece of information with different information sources to find relevant evidence of fake news. We also want to see whether or not choosing one image randomly affects the results, and to find a solution that uses more than one image.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "Bert4News is available at: https://github.com/bino282/bert4news", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Multiple imputation by chained equations: What is it and how does it work? International journal of methods in psychiatric research", "authors": [ { "first": "Melissa", "middle": [], "last": "Azur", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Stuart", "suffix": "" }, { "first": "Constantine", "middle": [], "last": "Frangakis", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Leaf", "suffix": "" } ], "year": 2011, "venue": "", "volume": "20", "issue": "", "pages": "40--49", "other_ids": { "DOI": [ "10.1002/mpr.329" ] }, "num": null, "urls": [], "raw_text": "Melissa Azur, Elizabeth Stuart, Constantine Frangakis, and Philip Leaf. 2011. Multiple imputation by chained equations: What is it and how does it work? 
International Journal of Methods in Psychiatric Research, 20:40-49.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Information credibility on twitter", "authors": [ { "first": "Carlos", "middle": [], "last": "Castillo", "suffix": "" }, { "first": "Marcelo", "middle": [], "last": "Mendoza", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Poblete", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 20th International Conference on World Wide Web, WWW '11", "volume": "", "issue": "", "pages": "675--684", "other_ids": { "DOI": [ "10.1145/1963405.1963500" ] }, "num": null, "urls": [], "raw_text": "Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of the 20th International Conference on World Wide Web, WWW '11, pages 675-684, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Very deep convolutional networks for text classification", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau, Holger Schwenk, Lo\u00efc Barrault, and Yann Lecun. 2017. 
Very deep convolutional networks for text classification.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "RMIT at PAN-CLEF 2020: Profiling Fake News Spreaders on Twitter", "authors": [ { "first": "Xinhuan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Elham", "middle": [], "last": "Naghizade", "suffix": "" }, { "first": "Damiano", "middle": [], "last": "Spina", "suffix": "" }, { "first": "Xiuzhen", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "CLEF 2020 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinhuan Duan, Elham Naghizade, Damiano Spina, and Xiuzhen Zhang. 2020. RMIT at PAN-CLEF 2020: Profiling Fake News Spreaders on Twitter. In CLEF 2020 Labs and Workshops, Notebook Papers. 
CEUR Workshop Proceedings.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Towards automatic fake news classification", "authors": [ { "first": "Souvick", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Chirag", "middle": [], "last": "Shah", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Association for Information Science and Technology", "volume": "55", "issue": "", "pages": "805--807", "other_ids": {}, "num": null, "urls": [], "raw_text": "Souvick Ghosh and Chirag Shah. 2018. Towards automatic fake news classification. Proceedings of the Association for Information Science and Technology, 55(1):805-807.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Fndnet - a deep convolutional neural network for fake news detection", "authors": [ { "first": "Rohit", "middle": [], "last": "Kumar Kaliyar", "suffix": "" }, { "first": "Anurag", "middle": [], "last": "Goswami", "suffix": "" }, { "first": "Pratik", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Soumendu", "middle": [], "last": "Sinha", "suffix": "" } ], "year": 2020, "venue": "Cogn. Syst. Res", "volume": "61", "issue": "C", "pages": "32--44", "other_ids": { "DOI": [ "10.1016/j.cogsys.2019.12.005" ] }, "num": null, "urls": [], "raw_text": "Rohit Kumar Kaliyar, Anurag Goswami, Pratik Narang, and Soumendu Sinha. 2020. Fndnet - a deep convolutional neural network for fake news detection. Cogn. Syst. 
Res., 61(C):32-44.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Mvae: Multimodal variational autoencoder for fake news detection", "authors": [ { "first": "Dhruv", "middle": [], "last": "Khattar", "suffix": "" }, { "first": "Jaipal", "middle": [], "last": "Singh Goud", "suffix": "" }, { "first": "Manish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Vasudeva", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2019, "venue": "The World Wide Web Conference, WWW '19", "volume": "", "issue": "", "pages": "2915--2921", "other_ids": { "DOI": [ "10.1145/3308558.3313552" ] }, "num": null, "urls": [], "raw_text": "Dhruv Khattar, Jaipal Singh Goud, Manish Gupta, and Vasudeva Varma. 2019. Mvae: Multimodal variational autoencoder for fake news detection. In The World Wide Web Conference, WWW '19, pages 2915-2921, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Identifying tweets with fake news", "authors": [ { "first": "S", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "M", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE International Conference on Information Reuse and Integration (IRI)", "volume": "", "issue": "", "pages": "460--464", "other_ids": { "DOI": [ "10.1109/IRI.2018.00073" ] }, "num": null, "urls": [], "raw_text": "S. Krishnan and M. Chen. 2018. Identifying tweets with fake news. 
In 2018 IEEE International Conference on Information Reuse and Integration (IRI), pages 460-464.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Reintel: A multimodal data challenge for responsible information identification on social network sites", "authors": [ { "first": "Duc-Trong", "middle": [], "last": "Le", "suffix": "" }, { "first": "Xuan-Son", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Nhu-Dung", "middle": [], "last": "To", "suffix": "" }, { "first": "Huu-Quang", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Thuy-Trinh", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Linh", "middle": [], "last": "Le", "suffix": "" }, { "first": "Anh-Tuan", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Minh-Duc", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Nghia", "middle": [], "last": "Le", "suffix": "" }, { "first": "Huyen", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Hoang", "middle": [ "D" ], "last": "Nguyen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duc-Trong Le, Xuan-Son Vu, Nhu-Dung To, Huu-Quang Nguyen, Thuy-Trinh Nguyen, Linh Le, Anh-Tuan Nguyen, Minh-Duc Hoang, Nghia Le, Huyen Nguyen, and Hoang D. Nguyen. 2020. 
Reintel: A multimodal data challenge for responsible information identification on social network sites.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Rumor detection on twitter with tree-structured recursive neural networks", "authors": [ { "first": "Jing", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1980--1989", "other_ids": { "DOI": [ "10.18653/v1/P18-1184" ] }, "num": null, "urls": [], "raw_text": "Jing Ma, Wei Gao, and Kam-Fai Wong. 2018. Rumor detection on twitter with tree-structured recursive neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1980-1989, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "PhoBERT: Pre-trained language models for Vietnamese", "authors": [ { "first": "Dat", "middle": [ "Quoc" ], "last": "Nguyen", "suffix": "" }, { "first": "Anh", "middle": [ "Tuan" ], "last": "Nguyen", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "1037--1042", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Vietnamese. 
In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1037-1042.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The role of user profile for fake news detection", "authors": [ { "first": "Kai", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Xinyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Suhang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Reza", "middle": [], "last": "Zafarani", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Shu, Xinyi Zhou, Suhang Wang, Reza Zafarani, and Huan Liu. 2019. The role of user profile for fake news detection.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Early detection of rumours on twitter via stance transfer learning", "authors": [ { "first": "Lin", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Xiuzhen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Advances in Information Retrieval", "volume": "", "issue": "", "pages": "575--588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin Tian, Xiuzhen Zhang, Yan Wang, and Huan Liu. 2020. Early detection of rumours on twitter via stance transfer learning. In Advances in Information Retrieval, pages 575-588, Cham. 
Springer International Publishing.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Five shades of untruth: Finer-grained classification of fake news", "authors": [ { "first": "L", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wang", "suffix": "" }, { "first": "G", "middle": [], "last": "De Melo", "suffix": "" }, { "first": "G", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2018, "venue": "IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining", "volume": "", "issue": "", "pages": "593--594", "other_ids": { "DOI": [ "10.1109/ASONAM.2018.8508256" ] }, "num": null, "urls": [], "raw_text": "L. Wang, Y. Wang, G. de Melo, and G. Weikum. 2018. Five shades of untruth: Finer-grained classification of fake news. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 593-594.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Eann: Event adversarial neural networks for multi-modal fake news detection", "authors": [ { "first": "Yaqing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Fenglong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Z", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Ye", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "G", "middle": [], "last": "Xun", "suffix": "" }, { "first": "Kishlay", "middle": [], "last": "Jha", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaqing Wang, Fenglong Ma, Z. Jin, Ye Yuan, G. Xun, Kishlay Jha, Lu Su, and Jing Gao. 2018. Eann: Event adversarial neural networks for multi-modal fake news detection. 
Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Ti-cnn: Convolutional neural networks for fake news detection", "authors": [ { "first": "Yang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qingcai", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Zhoujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Philip", "middle": [ "S" ], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Yang, Lei Zheng, Jiawei Zhang, Qingcai Cui, Zhoujun Li, and Philip S. Yu. 2018. Ti-cnn: Convolutional neural networks for fake news detection.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Safe: Similarity-aware multi-modal fake news detection", "authors": [ { "first": "Xinyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jindi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Reza", "middle": [], "last": "Zafarani", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinyi Zhou, Jindi Wu, and Reza Zafarani. 2020. Safe: Similarity-aware multi-modal fake news detection.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Network-based fake news detection: A pattern-driven approach", "authors": [ { "first": "Xinyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Reza", "middle": [], "last": "Zafarani", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinyi Zhou and Reza Zafarani. 2019. 
Network-based fake news detection: A pattern-driven approach.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "uris": null, "text": "Figure 1: General Model Architecture", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "Text-based features extractor for each model.", "type_str": "figure" }, "TABREF1": { "html": null, "num": null, "content": "
", "text": "Parameters Setting", "type_str": "table" }, "TABREF2": { "html": null, "num": null, "content": "
Exp	ROC-AUC
Convert words to correct forms	0.918298
Keep words in incorrect forms	0.920608
", "text": "Two ways of preprocessing texts.", "type_str": "table" }, "TABREF3": { "html": null, "num": null, "content": "
Exp	ROC-AUC
PhoBERT	0.920608
Bert4news	0.927694
Bert4news + attention	0.940217
", "text": "Comparison of different pre-trained models and using attention mechanism", "type_str": "table" }, "TABREF4": { "html": null, "num": null, "content": "
Exp	ROC-AUC
Model 1	0.939215
Model 2	0.919242
Model 3	0.940217
Ensemble	0.944949
", "text": "Final results", "type_str": "table" } } } }