{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:13:59.452195Z"
},
"title": "ReINTEL Challenge 2020: A Comparative Study of Hybrid Deep Neural Network for Reliable Intelligence Identification on Vietnamese SNSs",
"authors": [
{
"first": "Viet",
"middle": [],
"last": "Hoang",
"suffix": "",
"affiliation": {
"laboratory": "AI Research Team, R&D Lab",
"institution": "Sun* Inc",
"location": {}
},
"email": "trinh.viet.hoang@sun-asterisk.com"
},
{
"first": "Tien",
"middle": [],
"last": "Trinh",
"suffix": "",
"affiliation": {
"laboratory": "AI Research Team, R&D Lab",
"institution": "Sun* Inc",
"location": {}
},
"email": ""
},
{
"first": "Tam",
"middle": [],
"last": "Bui",
"suffix": "",
"affiliation": {
"laboratory": "AI Research Team, R&D Lab",
"institution": "Sun* Inc",
"location": {}
},
"email": ""
},
{
"first": "Nguyen",
"middle": [],
"last": "Minh",
"suffix": "",
"affiliation": {
"laboratory": "AI Research Team, R&D Lab",
"institution": "Sun* Inc",
"location": {}
},
"email": ""
},
{
"first": "Quang",
"middle": [],
"last": "Huy",
"suffix": "",
"affiliation": {
"laboratory": "AI Research Team, R&D Lab",
"institution": "Sun* Inc",
"location": {}
},
"email": ""
},
{
"first": "Huu",
"middle": [],
"last": "Dao",
"suffix": "",
"affiliation": {
"laboratory": "AI Research Team, R&D Lab",
"institution": "Sun* Inc",
"location": {}
},
"email": ""
},
{
"first": "Ngoc",
"middle": [
"N"
],
"last": "Pham",
"suffix": "",
"affiliation": {
"laboratory": "AI Research Team, R&D Lab",
"institution": "Sun* Inc",
"location": {}
},
"email": ""
},
{
"first": "",
"middle": [],
"last": "Tran",
"suffix": "",
"affiliation": {
"laboratory": "AI Research Team, R&D Lab",
"institution": "Sun* Inc",
"location": {}
},
"email": ""
},
{
"first": "Ta",
"middle": [
"Minh"
],
"last": "Thanh",
"suffix": "",
"affiliation": {
"laboratory": "Le Quy Don Technical University",
"institution": "",
"location": {
"settlement": "Ha Noi",
"country": "Vietnam"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The overwhelming abundance of data has created a misinformation crisis. Unverified sensationalism that is designed to grab the readers' short attention span, when crafted with malice, has caused irreparable damage to our society's structure. As a result, determining the reliability of an article has become a crucial task. After various ablation studies, we propose a multi-input model that can effectively leverage both tabular metadata and post content for the task. Applying state-of-the-art finetuning techniques for the pretrained component and training strategies for our complete model, we have achieved a 0.9462 ROC-score on the VLSP private test set.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The overwhelming abundance of data has created a misinformation crisis. Unverified sensationalism that is designed to grab the readers' short attention span, when crafted with malice, has caused irreparable damage to our society's structure. As a result, determining the reliability of an article has become a crucial task. After various ablation studies, we propose a multi-input model that can effectively leverage both tabular metadata and post content for the task. Applying state-of-the-art finetuning techniques for the pretrained component and training strategies for our complete model, we have achieved a 0.9462 ROC-score on the VLSP private test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The fast growth of social media and misinformed contents have posed an incremental challenge of exposing untrustworthy news to billions of their global users, including 65 million Vietnamese users (Social, 2020) . Consequently, the spread of mistrust information on social cites has placed real damages on government, policymakers, organizations, and citizens of many countries (Cheng and Chen, 2020; Pham et al., 2020) , resulting in an urge for fast and large-scale fact-checking online contents. With the enormous amount of news and information on the internet daily, this is impossible to be efficiently done only by human efforts, putting a quest to create a trustworthy system to perform the task automatically.",
"cite_spans": [
{
"start": 197,
"end": 211,
"text": "(Social, 2020)",
"ref_id": null
},
{
"start": 378,
"end": 400,
"text": "(Cheng and Chen, 2020;",
"ref_id": "BIBREF1"
},
{
"start": 401,
"end": 419,
"text": "Pham et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "1.1"
},
{
"text": "Reliable Intelligence Identification on Vietnamese SNSs (ReINTEL) is the task of reliable or unreliable social-network-sites (SNSs) identification. The main difficulties of these tasks, including:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "1.1"
},
{
"text": "\u2022 The given data (contents of social sites) is unstructured, containing mostly texts combined with metadata (including: images, dates, numbers, username, id, etc) . The metainformation is partially missing and incorrect, making the usage of those data more challenging.",
"cite_spans": [
{
"start": 108,
"end": 162,
"text": "(including: images, dates, numbers, username, id, etc)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "1.1"
},
{
"text": "\u2022 The problem is multi-modal learning, which 'involves relating information from multiple sources' (Sachowski, 2016) , resulting in the search for a proper combination of features from those sources to learn a unified model with high performance.",
"cite_spans": [
{
"start": 99,
"end": 116,
"text": "(Sachowski, 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "1.1"
},
{
"text": "In this paper, we propose our methods to resolve these above-mentioned problems. With thorough experiments, we determined to answers two main questions: Should we incorporate multi-source data? Furthermore, how to combine them in terms of training strategies? Our contributions are as followed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions",
"sec_num": "1.2"
},
{
"text": "\u2022 We provide a reliable method of data cleansing, making metadata ready for prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions",
"sec_num": "1.2"
},
{
"text": "\u2022 More importantly, we are the first who construct a comprehensive comparative study to discover the effectiveness of models when incorporating multi-source data with different training strategies. Our experiment's results reveal that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions",
"sec_num": "1.2"
},
{
"text": "-Models using text or meta-features alone has a crucial gap in performance, indicating that texture information is significantly more predictive than metadata. -Models utilize multi-source data with different training strategies results in a wide range of performance. This finding implies that combining data in training has a significant impact on the overall performance. -Combining data from multi-sources with particular training plans leads to our best models. Additionally, the model trained with metadata alone performs significantly better than a random guess, shedding light on the meta data's informativeness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions",
"sec_num": "1.2"
},
{
"text": "\u2022 We apply state-of-the-art transfer learning methods for textual feature extractions and neural network (in comparison with other traditional machine learning methods) for tabular-data feature representation, achieving the competitive performance of 0.9418 ROCscore on the public test set (ranked 2nd) and 0.9462 ROC-score (ranked 3th) on the private test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions",
"sec_num": "1.2"
},
{
"text": "In the following sections, we briefly review some related works involve with our methods. Next, in section 3, we illustrate our method in detail. Our experiments are described in Section 4, including dataset description, data preprocessing methods, and our model configurations, whereas Section 5 indicates all of our experimental results. Finally, section 6 is the conclusion for our proposed framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Roadmap",
"sec_num": "1.3"
},
{
"text": "Recent works on learning universal representation for text, namely Elmo (Peters et al., 2018) , GPT (Radford, 2018) , BERT (Devlin et al., 2018) have brought remarkable improvements for wide, diverse NLP downstream tasks: Text Classification, Question Answering and Named Entity Recognition. In contrast to traditional methods such as Word2vec (Mikolov et al., 2013) or Glove (Pennington et al., 2014) which learns context-independent word embeddings, universal language models were trained on a massively large amount of unlabeled data with different pretext tasks, including causal language modeling and masked language modeling, to learn a deep contextual representation of words given its context.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 100,
"end": 115,
"text": "(Radford, 2018)",
"ref_id": "BIBREF18"
},
{
"start": 123,
"end": 144,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 344,
"end": 366,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 376,
"end": 401,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Representation For Text",
"sec_num": "2.1"
},
{
"text": "Studies of fake news identification on social network sites have gained significant attention recently. Most of them utilize data from multiple sources. For example, CSI (Ruchansky et al., 2017) , a framework with several modules based on Long Short-Term Memory (Hochreiter and Schmidhuber, 1997 ) and a fully connected layer that utilizes the article's contents, the users' responses and behaviors of source users who promote it. Another instance is dEFEND (Shu et al., 2019) , which exploits both news contents and user comments with a deep hierarchical co-attention network to learn a rich representation for fake news detection. From a slightly different point of view, TriFN (Shu et al., 2017 ) models a tri-relationship between users, publishers, and new contents by several embedding methods and experiments promising results. Although utilizing multi-source data, existing research appears to lack a comprehensive study on the effectiveness of input-combination strategies.",
"cite_spans": [
{
"start": 170,
"end": 194,
"text": "(Ruchansky et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 262,
"end": 295,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF7"
},
{
"start": 458,
"end": 476,
"text": "(Shu et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 680,
"end": 697,
"text": "(Shu et al., 2017",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fake News Detection on SNSs",
"sec_num": "2.2"
},
{
"text": "Inspired by BERT's textual learning methods, PhoBERT (Nguyen and Nguyen, 2020) was proposed to extend the successes of deep pre-trained language models to Vietnamese. Its pretraining approach is based on RoBERTa training strategies to optimize BERT training procedure. Additionally, PhoBERT also consists of two different settings, PhoBERT Base, which uses 12 Transformer Encoder layers and 24 layers with PhoBERT Large. It improves many Vietnamese NLP downstream tasks. For instance, Pham (Pham et al., 2020) introduced novel techniques to adapt general-purpose PhoBERT to a specific text classification task and archives state of the art on Vietnamese Hate Speech Detection (HSD) campaign.",
"cite_spans": [
{
"start": 490,
"end": 509,
"text": "(Pham et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vietnamese Natural Language Processing",
"sec_num": "2.3"
},
{
"text": "In this paper, we use the dataset provided by VLSP organizers for ReINTEL task (Le et al., 2020) , composed of contents from Vietnamese social network sites (SNSs), e.g., Facebook, Zalo, or Lotus (Social, 2020). There are approximately 5,000 labeled training examples, while the test set consists of 2,000 unlabeled examples. Each example is provided with information about the news's textual content, timestamp, number of likes, shares, comments, and attached pictures. Table 1 indicates the detailed statistic of the dataset, the data distribution of reliable and unreliable news was heavily imbalanced and skewed toward trustworthy contents.",
"cite_spans": [
{
"start": 79,
"end": 96,
"text": "(Le et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 471,
"end": 478,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "Fake news can be studied with respect to four perspectives: (i) knowledge-based (focusing on the false knowledge in fake news); (ii) style-based (concerned with how fake news is written); (iii) propagation-based (focused on how fake news spreads); and (iii) credibility-based (investigating the credibility of its creators and spreaders) (Zhou and Zafarani, 2018) . In this task, with the ReIN-TEL dataset, we focused on knowledge-based and credibility-based. Specifically, we performed the following preprocessing to extract the necessary information.",
"cite_spans": [
{
"start": 338,
"end": 363,
"text": "(Zhou and Zafarani, 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "3.2"
},
{
"text": "\u2022 Deleted incorrect data rows: While mining data, there are few incorrect rows due to the process of collecting and storing data. We decided to delete these rows from the data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "3.2"
},
{
"text": "\u2022 Filled missing value: To deal with missing values, we fill them with different strategies: numbers with 0, timestamps with the min timestamp and post messages with empty string",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "3.2"
},
{
"text": "\u2022 Extracted date time features from timestamp values: For each timestamp value, we decoded these to date time values to enrich feature: minutes, hours, days, months, years, weekdays, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "3.2"
},
{
"text": "\u2022 Created user_score feature: For user id, we created a user reputation score metric based on previous posts in dataset. This score is used to evaluate the user's future posts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "3.2"
},
{
"text": "\u2022 Created image_count feature: With images of each post, we compiled several information, including: number of images and image's aspect ratio",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "3.2"
},
{
"text": "\u2022 Preprocessed post_message feature: We perform post messages preprocessing more carefully than the rest. The processing stages are listed below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "3.2"
},
{
"text": "-Filled missing value with empty string -Standardized Vietnamese punctuation -Removed HTML tags -Replaced email, links, phone, numbers, emoji, date time with new corresponding token",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "3.2"
},
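{
"text": "A minimal sketch (not from the paper) of the preprocessing steps described above, using pandas. Column names such as 'timestamp_post', 'num_like_post', 'num_comment_post', 'num_share_post', and 'post_message' are assumptions for illustration and may differ from the released ReINTEL schema; the placeholder tokens are likewise illustrative.\n\nimport pandas as pd\n\ndef preprocess(df: pd.DataFrame) -> pd.DataFrame:\n    df = df.copy()\n    # Fill missing values: numbers with 0, timestamps with the minimum timestamp,\n    # post messages with an empty string.\n    for col in ['num_like_post', 'num_comment_post', 'num_share_post']:\n        df[col] = df[col].fillna(0)\n    df['timestamp_post'] = df['timestamp_post'].fillna(df['timestamp_post'].min())\n    df['post_message'] = df['post_message'].fillna('')\n    # Decode timestamps into date-time features (minute, hour, day, month, year, weekday).\n    dt = pd.to_datetime(df['timestamp_post'], unit='s', errors='coerce')\n    df['hour'], df['weekday'], df['month'] = dt.dt.hour, dt.dt.weekday, dt.dt.month\n    # Replace links and raw numbers in the post message with placeholder tokens.\n    df['post_message'] = (df['post_message']\n                          .str.replace(r'https?://\\S+', ' <url> ', regex=True)\n                          .str.replace(r'\\d+', ' <num> ', regex=True))\n    return df",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": "3.2"
},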
{
"text": "Metadata for the ReINTEL dataset is composed of all input features except post message (text data). We tried numerous machine learning algorithms to learn a classifier using only metadata, ranging from traditional methods: Logistic Regression, Linear Discriminant Analysis, K Nearest Neighbor, Decision Tree, Gaussian Naive Bayes, Support Vector Machine, Adaptive Boosting, Gradient Boosting, Random Forest (Hastie et al., 2001) , and Extra Trees (Geurts et al., 2006) to a deep learning method: Multi-Layer Perceptron (Hastie et al., 2001) We then proceeded to select a handful of model with high performances and complexities to serve as a base model for stacking (Wolpert, 1992) . Meanwhile, for the meta-model used in stacking, we chose Logistic Regression. We also did the same for blending ensemble (Sill et al., 2009) .",
"cite_spans": [
{
"start": 407,
"end": 428,
"text": "(Hastie et al., 2001)",
"ref_id": "BIBREF5"
},
{
"start": 447,
"end": 468,
"text": "(Geurts et al., 2006)",
"ref_id": "BIBREF3"
},
{
"start": 519,
"end": 540,
"text": "(Hastie et al., 2001)",
"ref_id": "BIBREF5"
},
{
"start": 666,
"end": 681,
"text": "(Wolpert, 1992)",
"ref_id": "BIBREF26"
},
{
"start": 805,
"end": 824,
"text": "(Sill et al., 2009)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model for Tabular Data",
"sec_num": "3.3"
},
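{
"text": "A hedged sketch of the stacking setup described above, using scikit-learn. The specific base models and hyperparameters shown here (tree counts, MLP layer sizes, cv folds) are placeholders rather than the paper's values; Logistic Regression as the stacking meta-model follows the description above.\n\nfrom sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.neural_network import MLPClassifier\n\n# Base models chosen among the high-performing metadata classifiers;\n# a Logistic Regression meta-model combines their out-of-fold predictions.\nbase_models = [\n    ('gb', GradientBoostingClassifier()),\n    ('rf', RandomForestClassifier(n_estimators=200)),\n    ('mlp', MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),\n]\nstacked = StackingClassifier(estimators=base_models,\n                             final_estimator=LogisticRegression(),\n                             stack_method='predict_proba', cv=5)\n# Usage: stacked.fit(X_meta_train, y_train); stacked.predict_proba(X_meta_valid)[:, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model for Tabular Data",
"sec_num": "3.3"
},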
{
"text": "BERT's layers capture a rich hierarchy of linguistic information, with surface features at the bottom, general syntactic knowledge in the middle, and specific semantic information at the top layer (Jawahar et al., 2019) . Therefore, in order to better benefit for our downstream task, we incorporate as much as possible different kinds of information from our model backbone PhoBERT by concatenating [CLS] hidden states from each of 12 blocks, followed by a straightforward custom head, which is a multilayer perceptron with Dropout (Srivastava et al., 2014) . The architecture of the model is shown in the Figure 1 .",
"cite_spans": [
{
"start": 197,
"end": 219,
"text": "(Jawahar et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 533,
"end": 558,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 607,
"end": 615,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Deep learning-based Content Classification",
"sec_num": "3.4"
},
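{
"text": "A minimal PyTorch sketch of the content model described above: the [CLS] hidden state from each of the 12 PhoBERT blocks is concatenated and passed through an MLP head with Dropout. The head sizes and dropout rate are illustrative assumptions, not the paper's exact configuration.\n\nimport torch\nimport torch.nn as nn\nfrom transformers import AutoModel\n\nclass ContentClassifier(nn.Module):\n    def __init__(self, n_classes=2, dropout=0.3):\n        super().__init__()\n        self.backbone = AutoModel.from_pretrained('vinai/phobert-base', output_hidden_states=True)\n        hidden = self.backbone.config.hidden_size  # 768 for the base model\n        self.head = nn.Sequential(nn.Linear(hidden * 12, 512), nn.ReLU(),\n                                  nn.Dropout(dropout), nn.Linear(512, n_classes))\n\n    def forward(self, input_ids, attention_mask):\n        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)\n        # hidden_states = (embedding output, block 1, ..., block 12); take [CLS] (position 0) of each block\n        cls_states = [h[:, 0] for h in out.hidden_states[1:]]\n        return self.head(torch.cat(cls_states, dim=-1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep learning-based Content Classification",
"sec_num": "3.4"
},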
{
"text": "Our experiments (details are in the below section) indicates that meta data is informative predictors for reliable and unreliable news classification. Therefore, we decided to combine both text and meta data to resolve the task. The structure of our multi-input model is described (in Figure 2) as followed: output features of Multi-Layer Perceptron and RoBERTa models, after being concatenated or added together, were simply passed through a custom head classifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 294,
"text": "Figure 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Deep Multi-input Model",
"sec_num": "3.5"
},
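{
"text": "A hedged sketch of the multi-input fusion described above: metadata features pass through an MLP, textual features come from the RoBERTa-based submodel, and the two are concatenated (or added) before a custom classifier head. Dimensions and the fusion mode are illustrative assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass MultiInputHead(nn.Module):\n    def __init__(self, text_dim, n_meta_features, hidden=256, n_classes=2, fuse='concat'):\n        super().__init__()\n        self.fuse = fuse\n        # Metadata branch: a small MLP projected to the same width as the text features.\n        self.meta_mlp = nn.Sequential(nn.Linear(n_meta_features, hidden), nn.ReLU(),\n                                      nn.Linear(hidden, text_dim), nn.ReLU())\n        in_dim = text_dim * 2 if fuse == 'concat' else text_dim\n        self.classifier = nn.Sequential(nn.Dropout(0.3), nn.Linear(in_dim, hidden),\n                                        nn.ReLU(), nn.Linear(hidden, n_classes))\n\n    def forward(self, text_features, meta_features):\n        meta = self.meta_mlp(meta_features)\n        fused = torch.cat([text_features, meta], dim=-1) if self.fuse == 'concat' else text_features + meta\n        return self.classifier(fused)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Multi-input Model",
"sec_num": "3.5"
},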
{
"text": "We divide the dataset into a training set and a validation set with 10-fold cross validation method. Each fold, we use AdamW (Kingma and Ba, 2014) for optimization with a learning rate of 10 \u22125 and a batch size of 32. Warm-up learning was applied, with the chosen maximum learning rate was 2 \u00d7 10 \u22125 . Except for all bias parameters and coefficients of LayerNorm layers (Ba et al., 2016) , the rest of the model's parameters were regularized with weight decay to reduce overfitting. We used a regularization coefficient of 0.01. The number of training epochs was 20.",
"cite_spans": [
{
"start": 137,
"end": 146,
"text": "Ba, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 370,
"end": 387,
"text": "(Ba et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.1"
},
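{
"text": "A sketch of the optimizer setup described above, assuming a PyTorch model and the HuggingFace transformers scheduler helper. The warm-up proportion is an assumption; the paper only states the learning rates, batch size, weight decay coefficient, and epoch count.\n\nimport torch\nfrom transformers import get_linear_schedule_with_warmup\n\ndef build_optimizer(model, max_lr=2e-5, weight_decay=0.01, num_training_steps=1000):\n    # Exclude bias parameters and LayerNorm coefficients from weight decay.\n    no_decay = ('bias', 'LayerNorm.weight')\n    groups = [\n        {'params': [p for n, p in model.named_parameters() if not any(k in n for k in no_decay)],\n         'weight_decay': weight_decay},\n        {'params': [p for n, p in model.named_parameters() if any(k in n for k in no_decay)],\n         'weight_decay': 0.0},\n    ]\n    optimizer = torch.optim.AdamW(groups, lr=max_lr)\n    # Linear warm-up to the maximum learning rate, then linear decay.\n    scheduler = get_linear_schedule_with_warmup(optimizer,\n                                                num_warmup_steps=int(0.1 * num_training_steps),\n                                                num_training_steps=num_training_steps)\n    return optimizer, scheduler",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.1"
},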
{
"text": "Instead of using cross-entropy loss, we implemented a label smoothing cross-entropy loss function, a combination of cross-entropy loss and label smoothing (M\u00fcller et al., 2019) . The smoothing rate is set to 0.15.",
"cite_spans": [
{
"start": 155,
"end": 176,
"text": "(M\u00fcller et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.1"
},
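{
"text": "A common formulation of label smoothing cross-entropy as a PyTorch loss; the paper specifies only the smoothing rate of 0.15, so the rest of this sketch is a standard assumption.\n\nimport torch\nimport torch.nn.functional as F\n\ndef label_smoothing_cross_entropy(logits, target, smoothing=0.15):\n    # Spread the smoothing probability mass uniformly over the non-target classes\n    # and keep 1 - smoothing on the true class, then take the usual cross-entropy.\n    n_classes = logits.size(-1)\n    log_probs = F.log_softmax(logits, dim=-1)\n    true_dist = torch.full_like(log_probs, smoothing / (n_classes - 1))\n    true_dist.scatter_(1, target.unsqueeze(1), 1.0 - smoothing)\n    return torch.mean(torch.sum(-true_dist * log_probs, dim=-1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.1"
},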
{
"text": "We applied state-of-the-art fine-tuning techniques including: gradual unfreezing, discriminate learning rate, warm-up learning rate schedule (Pham et al., 2020) to perform effective task adaptation (Gururangan et al., 2020).",
"cite_spans": [
{
"start": 141,
"end": 160,
"text": "(Pham et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning technique",
"sec_num": "4.2"
},
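{
"text": "An illustrative sketch of gradual unfreezing and discriminative learning rates for a HuggingFace-style RoBERTa/PhoBERT backbone; the per-layer decay factor and the unfreezing schedule are assumptions, not the paper's exact settings.\n\ndef discriminative_param_groups(backbone, head, base_lr=2e-5, decay=0.95):\n    # Layers closer to the task head keep the base learning rate; earlier encoder\n    # blocks get progressively smaller learning rates.\n    groups = [{'params': head.parameters(), 'lr': base_lr}]\n    for depth, layer in enumerate(reversed(list(backbone.encoder.layer))):\n        groups.append({'params': layer.parameters(), 'lr': base_lr * (decay ** (depth + 1))})\n    return groups\n\ndef unfreeze_top_blocks(backbone, n):\n    # Gradual unfreezing: keep the backbone frozen and unfreeze the top n encoder\n    # blocks, increasing n after each training stage.\n    for p in backbone.parameters():\n        p.requires_grad = False\n    for layer in list(backbone.encoder.layer)[-n:]:\n        for p in layer.parameters():\n            p.requires_grad = True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning technique",
"sec_num": "4.2"
},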
{
"text": "We apply four training strategies to study the effects of combining text and mate data on our above-mentioned multi-data model's performance. Notice here that we used the pre-trained weights of RoBERTa as the initialization for the textualfeature-extraction-model's backbone in all strategies. We refer to the textual and meta feature extraction parts of the multi-source model are referred as text and meta submodel for short. Our training policies are described as followed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Strategies",
"sec_num": "4.3"
},
{
"text": "\u2022 Strategy 1 (S1): The parameters of both the text submodel's head and the meta submodel are initialized randomly",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Strategies",
"sec_num": "4.3"
},
{
"text": "\u2022 Strategy 2 (S2): The meta submodel will be trained for the task first. Its feature extraction part (all layers except the output one used for classification) is used to combine with the text submodel. The parameters of the text submodel's head are initialized randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Strategies",
"sec_num": "4.3"
},
{
"text": "\u2022 Strategy 3 (S3): Meta submodel is un-trained when incorporates with the text submodel, which is already fine-tuned with the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Strategies",
"sec_num": "4.3"
},
{
"text": "\u2022 Strategy 4 (S4): Both the two submodels are trained/fine-tuned with the classification task before being combined for further training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Strategies",
"sec_num": "4.3"
},
{
"text": "Our experiments are conducted on a computer with Intel Core i7 9700K Turbo 4.9GHz, 32GB of RAM, GPU GeForce GTX 2080Ti, and 1TB SSD hard disk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System configuration",
"sec_num": "4.4"
},
{
"text": "For this work, we used the Area Under the Receiver Operating Characteristic Curve (ROC-AUC), a common evaluation metrics for classification tasks. The Receiver Operating Characteristic (ROC) curve shows how well a model classify samples by plotting the true positive rate against the false positive rate at various thresholds. To turn the graph into a numerical metrics, the Area Under Curve (AUC) is then evaluated. A maximum value of 1.0 indicates that the model predicts correctly for all thresholds, and a minimum of 0.0 implies the model gets everything wrong all the time. The formula for ROC-AUC is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "ROC-AUC = +\u221e 0 +\u221e \u2212\u221e f 1 (u)f 0 (u \u2212 v)dudv",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "(1) where f 1 and f 0 are the density functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
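{
"text": "In practice, Eq. (1) is not evaluated directly; the ROC-AUC is computed from the predicted probabilities, e.g. with scikit-learn. A toy example follows; the labels and scores are illustrative only.\n\nimport numpy as np\nfrom sklearn.metrics import roc_auc_score\n\ny_true = np.array([0, 0, 1, 1, 1, 0])                 # 1 = unreliable news\ny_score = np.array([0.1, 0.4, 0.8, 0.35, 0.9, 0.2])   # model's predicted P(unreliable)\nprint(roc_auc_score(y_true, y_score))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},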
{
"text": "Our results are shown in Table 2 3 4 5 Table 2 compares the effectiveness of traditional machine learning algorithm on metadata. The performance ranges from a ROC-AUC score of 0.5450 with a simple Logistic Regression, to 0.7338 through employing Gradient Boosting across various models. Despite achieving results not as competitive as which of Gradient Boosting, the Multi-Layer Perceptron model was chosen due to its differentiability, which enabled joint training with the textual model (details in Section 3.5). Most of the aforementioned model's performances are significantly better random guessing, indicating that metadata is an informative predictor for the news classification task. Table 3 shows the ROC-AUC scores as we tried incorporating different embeddings from different RoBERTa blocks. Specifically, as illustrated in Figure 1 , we selected a subset of all embeddings RoBERTa generated, which are then concatenated together and passed through a classifier. Amongst our trials, an ensemble of various combinations across all embeddings achieved the highest AUC-ROC score of 0.9418. Table 4 highlights one of the major discoveries of our work. It presents our best results for models using only meta-or text data to classify SNS. The performance gap between the two models is significant (more than 0.20 in ROC-AUC score), pointing out that textual features are more predictive than metadata. Besides, using only meta-features is considerably more accurate than random guess (0.7338 ROC-AUC score), indicating that its information can be employed to train a better model. Table 5 sheds lights on how to effectively combined multi-source data. S1, S2, S3, and S4 in the table refer to the previously-mentioned strategy 1, strategy 2, strategy 3, and strategy 4. S1 and S2 result in the least performance among the four, less than almost 0.05 and 0.02 ROC-AUC score than our second best strategies, S4. Additionally, compared to training with only textual features even better than S1 and inconsiderably worse than S2. This result indicates that fine-tuning text submodel with the task before combining with meta submodel is crucial to achieving high performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 47,
"text": "Table 2 3 4 5 Table 2",
"ref_id": "TABREF1"
},
{
"start": 693,
"end": 700,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 836,
"end": 844,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1099,
"end": 1106,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 1588,
"end": 1595,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Our results",
"sec_num": "5.2"
},
{
"text": "The worsen results of S1 compared to S2 and S3 compared to S4 points out that pretraining meta submodel before the combination of 2 submodels enhances the overall training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our results",
"sec_num": "5.2"
},
{
"text": "This paper has constructed a comprehensive comparative study to discover the effectiveness of models with multiple inputs and mixed data. We have explored and proposed different training strategies to train the hybrid deep neural architecture for reliable intelligence identification task. By conducting experiments using PhoBERT, we have demonstrated that combining mixed data with particular training plans leads to our best results. With our proposed methods, we have achieved a competitive performance of 94.18% ROC-score on the public test and 94.62% ROC-score on the private test set in VLSP's ReINTEL 2020 campaign.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Layer normalization",
"authors": [
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Ba",
"suffix": ""
},
{
"first": "Jamie",
"middle": [
"Ryan"
],
"last": "Kiros",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hin- ton. 2016. Layer normalization.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The influence of presumed fake news influence: Examining public support for corporate corrective response, media literacy interventions, and governmental regulation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zifei Fay",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "23",
"issue": "",
"pages": "705--729",
"other_ids": {
"DOI": [
"10.1080/15205436.2020.1750656"
]
},
"num": null,
"urls": [],
"raw_text": "Yang Cheng and Zifei Fay Chen. 2020. The influence of presumed fake news influence: Examining pub- lic support for corporate corrective response, media literacy interventions, and governmental regulation. Mass Communication and Society, 23(5):705-729.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Extremely randomized trees",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Geurts",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Ernst",
"suffix": ""
},
{
"first": "Louis",
"middle": [],
"last": "Wehenkel",
"suffix": ""
}
],
"year": 2006,
"venue": "Mach. Learn",
"volume": "63",
"issue": "1",
"pages": "3--42",
"other_ids": {
"DOI": [
"10.1007/s10994-006-6226-1"
]
},
"num": null,
"urls": [],
"raw_text": "Pierre Geurts, Damien Ernst, and Louis Wehenkel. 2006. Extremely randomized trees. Mach. Learn., 63(1):3-42.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "2020. Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Downey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Elements of Statistical Learning",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Jerome",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Hastie, Robert Tibshirani, and Jerome Fried- man. 2001. The Elements of Statistical Learning.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "What does BERT learn about the structure of language",
"authors": [
{
"first": "Ganesh",
"middle": [],
"last": "Jawahar",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3651--3657",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1356"
]
},
"num": null,
"urls": [],
"raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Reintel: A multimodal data challenge for responsible information identification on social network sites",
"authors": [
{
"first": "Duc-Trong",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Xuan-Son",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Nhu-Dung",
"middle": [],
"last": "To",
"suffix": ""
},
{
"first": "Huu-Quang",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thuy-Trinh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Linh",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Anh-Tuan",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Minh-Duc",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Nghia",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Huyen",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Hoang",
"middle": [
"D"
],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duc-Trong Le, Xuan-Son Vu, Nhu-Dung To, Huu- Quang Nguyen, Thuy-Trinh Nguyen, Linh Le, Anh- Tuan Nguyen, Minh-Duc Hoang, Nghia Le, Huyen Nguyen, and Hoang D. Nguyen. 2020. Reintel: A multimodal data challenge for responsible informa- tion identification on social network sites.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "When does label smoothing help?",
"authors": [
{
"first": "Rafael",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Kornblith",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafael M\u00fcller, Simon Kornblith, and Geoffrey Hinton. 2019. When does label smoothing help?",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Phobert: Pre-trained language models for vietnamese",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. Phobert: Pre-trained language models for viet- namese.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep contextualized word representations. CoRR",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. CoRR, abs/1802.05365.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "From universal language model to downstream task: Improving roberta-based vietnamese hate speech detection",
"authors": [
{
"first": "Quang",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Nguyen",
"middle": [],
"last": "Viet Anh",
"suffix": ""
},
{
"first": "Linh",
"middle": [],
"last": "Doan",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Ta",
"middle": [],
"last": "Thanh",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quang Pham, Nguyen Viet Anh, Linh Doan, Ngoc Tran, and Ta Thanh. 2020. From universal language model to downstream task: Improving roberta-based vietnamese hate speech detection.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "A",
"middle": [],
"last": "Radford",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Radford. 2018. Improving language understanding by generative pre-training.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "CSI: A hybrid deep model for fake news",
"authors": [
{
"first": "Natali",
"middle": [],
"last": "Ruchansky",
"suffix": ""
},
{
"first": "Sungyong",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natali Ruchansky, Sungyong Seo, and Yan Liu. 2017. CSI: A hybrid deep model for fake news. CoRR, abs/1703.06959.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Identify Potential Data Sources",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Sachowski",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "63--72",
"other_ids": {
"DOI": [
"10.1016/B978-0-12-804454-4.00006-X"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Sachowski. 2016. Identify Potential Data Sources, pages 63-72.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Defend: Explainable fake news detection",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Limeng",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery Data Mining, KDD '19",
"volume": "",
"issue": "",
"pages": "395--405",
"other_ids": {
"DOI": [
"10.1145/3292500.3330935"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Limeng Cui, Suhang Wang, Dongwon Lee, and Huan Liu. 2019. Defend: Explainable fake news detection. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery Data Mining, KDD '19, page 395-405, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Exploiting tri-relationship for fake news detection",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Suhang Wang, and Huan Liu. 2017. Exploit- ing tri-relationship for fake news detection. CoRR, abs/1712.07709.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Feature-weighted linear stacking",
"authors": [
{
"first": "J",
"middle": [],
"last": "Sill",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Tak\u00e1cs",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Mackey",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2009,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Sill, G. Tak\u00e1cs, L. Mackey, and D. Lin. 2009. Feature-weighted linear stacking. ArXiv, abs/0911.0460.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "We Are Social. 2020. Digital 2020 -global digital overview",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "We Are Social. 2020. Digital 2020 -global digital overview.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Mach. Learn. Res",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural net- works from overfitting. J. Mach. Learn. Res., 15(1):1929-1958.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Stacked generalization. Neural Networks",
"authors": [
{
"first": "H",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wolpert",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "5",
"issue": "",
"pages": "241--259",
"other_ids": {
"DOI": [
"10.1016/S0893-6080(05)80023-1"
]
},
"num": null,
"urls": [],
"raw_text": "David H. Wolpert. 1992. Stacked generalization. Neu- ral Networks, 5(2):241 -259.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Fake news: A survey of research, detection methods, and opportunities",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyi Zhou and Reza Zafarani. 2018. Fake news: A survey of research, detection methods, and opportu- nities. CoRR, abs/1812.00315.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The architecture model for content classification using RoBERTa pre-trained model."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "An illustration of our proposed deep multiinput architecture."
},
"TABREF0": {
"html": null,
"num": null,
"text": "Statistics of the datasets.",
"content": "<table><tr><td/><td>Dataset</td></tr><tr><td>Total News</td><td>5172</td></tr><tr><td>Users</td><td>3706</td></tr><tr><td>Unique News</td><td>5087</td></tr><tr><td>News have images</td><td>1287</td></tr><tr><td>Reliable News</td><td>4238</td></tr><tr><td>Unreliable News</td><td>934</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"html": null,
"num": null,
"text": "Performance of models using only meta data.",
"content": "<table><tr><td>Method</td><td>ROC-AUC</td></tr><tr><td>Logistic Regression</td><td>0.545037</td></tr><tr><td colspan=\"2\">Linear Discriminant Analysis 0.545037</td></tr><tr><td>K Nearest Neighbors</td><td>0.633251</td></tr><tr><td>Decision Tree</td><td>0.657217</td></tr><tr><td>Gaussian Naive Bayes</td><td>0.588978</td></tr><tr><td>Support Vector Machine</td><td>0.599256</td></tr><tr><td>Adaptive Boosting</td><td>0.673511</td></tr><tr><td>Gradient Boosting</td><td>0.733850</td></tr><tr><td>Random Forest</td><td>0.727192</td></tr><tr><td>Extra Tree</td><td>0.651323</td></tr><tr><td>Multi-Layer Perceptron</td><td>0.604653</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td colspan=\"2\">: ROC-AUC score on public test of combining</td></tr><tr><td colspan=\"2\">feature from blocks. Input model is the text content of</td></tr><tr><td>the news.</td><td/></tr><tr><td>Blocks</td><td>ROC-AUC</td></tr><tr><td>Block 1-6</td><td>0.913251</td></tr><tr><td>Block 6-12</td><td>0.937330</td></tr><tr><td>Block 9-12</td><td>0.921147</td></tr><tr><td>Block 1-12</td><td>0.939915</td></tr><tr><td colspan=\"2\">Block 1-12 (Ensemble) 0.941811</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"html": null,
"num": null,
"text": "Performance of models using only either text or meta data.",
"content": "<table><tr><td>Blocks</td><td>ROC-AUC</td></tr><tr><td colspan=\"2\">Only meta data 0.7338</td></tr><tr><td>Only text data</td><td>0.9628</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"num": null,
"text": "Performances of multi-data model with different training strategies.",
"content": "<table><tr><td>Blocks</td><td>ROC-AUC</td></tr><tr><td colspan=\"2\">Strategy 1 (S1) 0.9058</td></tr><tr><td colspan=\"2\">Strategy 2 (S2) 0.9399</td></tr><tr><td colspan=\"2\">Strategy 3 (S3) 0.9552</td></tr><tr><td colspan=\"2\">Strategy 4 (S4) 0.9628</td></tr></table>",
"type_str": "table"
}
}
}
}