{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:04.606599Z"
},
"title": "ReINTEL: A Multimodal Data Challenge for Responsible Information Identification on Social Network Sites",
"authors": [
{
"first": "Duc-Trong",
"middle": [],
"last": "Le",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Engineering and Technology",
"location": {
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Xuan-Son",
"middle": [],
"last": "Vu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ume\u00e5 University",
"location": {
"country": "Sweden"
}
},
"email": ""
},
{
"first": "Nhu-Dung",
"middle": [],
"last": "To",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney",
"location": {
"country": "Australia"
}
},
"email": ""
},
{
"first": "Huu-Quang",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "ReML.AI -Reliable Machine Learning Lab, International",
"institution": "",
"location": {}
},
"email": "harry.nguyen@glasgow.ac.uk"
},
{
"first": "Thuy-Trinh",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Linh",
"middle": [],
"last": "Le",
"suffix": "",
"affiliation": {
"laboratory": "ReML.AI -Reliable Machine Learning Lab, International",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Anh-Tuan",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "ReML.AI -Reliable Machine Learning Lab, International",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Minh-Duc",
"middle": [],
"last": "Hoang",
"suffix": "",
"affiliation": {
"laboratory": "ReML.AI -Reliable Machine Learning Lab, International",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Nghia",
"middle": [],
"last": "Le",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Huyen",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "ReML.AI -Reliable Machine Learning Lab, International",
"institution": "",
"location": {}
},
"email": "huyenntm@hus.edu.vn"
},
{
"first": "Hoang",
"middle": [
"D"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "ReML.AI -Reliable Machine Learning Lab, International",
"institution": "",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports on the ReINTEL Shared Task for Responsible Information Identification on social network sites, which is hosted at the seventh annual workshop on Vietnamese Language and Speech Processing (VLSP 2020). Given a piece of news with respective textual, visual content and metadata, participants are required to classify whether the news is 'reliable' or 'unreliable'. In order to generate a fair benchmark, we introduce a novel human-annotated dataset of over 10,000 news collected from a social network in Vietnam. All models will be evaluated in terms of AUC-ROC score, a typical evaluation metric for classification. The competition was run on the Codalab platform. Within two months, the challenge has attracted over 60 participants and recorded nearly 1,000 submission entries.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports on the ReINTEL Shared Task for Responsible Information Identification on social network sites, which is hosted at the seventh annual workshop on Vietnamese Language and Speech Processing (VLSP 2020). Given a piece of news with respective textual, visual content and metadata, participants are required to classify whether the news is 'reliable' or 'unreliable'. In order to generate a fair benchmark, we introduce a novel human-annotated dataset of over 10,000 news collected from a social network in Vietnam. All models will be evaluated in terms of AUC-ROC score, a typical evaluation metric for classification. The competition was run on the Codalab platform. Within two months, the challenge has attracted over 60 participants and recorded nearly 1,000 submission entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This challenge aims at identifying the reliability of information shared on social network sites (SNSs). With the blazing-fast spurt of SNSs (e.g. Facebook, Zalo and Lotus), there are approximately 65 million Vietnamese users on board with the annual growth of 2.7 million in the recent year, as reported by the Digital 2020 1 . SNSs have become widely accessible for users to not only connect friends but also freely create and share diverse content (Shu et al., 2017; Zhou et al., 2019) . A number of users, however, has exploited these social platforms to distribute fake news and unreliable information to fulfill their personal or political purposes (e.g. US election 2016 (Allcott and Gentzkow, 2017) ). It is not easy for other ordinary users to realize the unreliability, hence, they keep spreading the fake content to their friends. The problem becomes more seriously once the unreliable post becomes popular and gains belief among the community. Therefore, it raises an urgent need for detecting whether a piece of news on SNSs is reliable or not. This task has gained significant attention recently (Ruchansky et al., 2017; Shu et al., 2019a,b; Yang et al., 2019) .",
"cite_spans": [
{
"start": 451,
"end": 469,
"text": "(Shu et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 470,
"end": 488,
"text": "Zhou et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 678,
"end": 706,
"text": "(Allcott and Gentzkow, 2017)",
"ref_id": "BIBREF0"
},
{
"start": 1110,
"end": 1134,
"text": "(Ruchansky et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 1135,
"end": 1155,
"text": "Shu et al., 2019a,b;",
"ref_id": null
},
{
"start": 1156,
"end": 1174,
"text": "Yang et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The shared task focuses on the responsible (i.e. reliable) information identification on Vietnamese SNSs, referred to as ReINTEL. It is a part of the 7th annual workshop on Vietnamese Language and Speech Processing, VLSP 2020 2 for short. As a binary classification task, participants are required to propose models to determine the reliability of SNS posts based on their content, image and metadata information (e.g. number of likes, shares, and comments). The shared task consists of three phases namely Warm up, Public Test, Private Test, which is hosted on Codalab from October 21st, 2020 to November 30th, 2020. In summary, there are around 1000 submissions created by 8 teams and over 60 participants during the challenge period. 3. Is the source of the news reliable (e.g. from official channels)? 4. Is the language appropriate or provocative?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As our first contribution, this shared task provides an evaluation framework for the reliable information detection task, where participants could leverage and compare their innovative models on the same dataset. Their knowledge contribution may help improve safety on online social platforms. Another valuable contribution is the introduction of a novel dataset for the reliable information detection task. The dataset is built based on a fair human annotation of over 10,000 news from SNSs in Vietnam. We hope this dataset will be a useful benchmark for further research. In this shared task, AUC-ROC is utilized as the primary evaluation metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1: Data Annotation Tool",
"sec_num": null
},
{
"text": "The remainder of the paper is organized as follows. The next section describes the data collection and annotation methodologies. Subsequently, the shared task description and evaluation are summarized in Section 3. In Section 4, we discusses the potentials of language and vision transfer learning for the detection task. Section 5 describes the competition, approaches and respective results. Finally, Section 6 concludes the paper by suggesting potential applications for future studies and challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1: Data Annotation Tool",
"sec_num": null
},
{
"text": "We collect the data for two months from August to October 2020. There are two main sources of the data: SNSs and Vietnamese newspapers. As for the former source, public social media posts are retrieved from news groups and key opinion leaders (KOLs). Many fake news, however, has been flagged and removed from the social networking sites since the enforcement of Vietnamese cybersecurity law in 2019 (Son, 2018) . Therefore, to include the deleted fake news, we gather newspaper articles reporting these posts and recreate their content.",
"cite_spans": [
{
"start": 400,
"end": 411,
"text": "(Son, 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2.1"
},
{
"text": "All the collected data were originally posted in the period of March -June 2020. During this time, Vietnam was facing a second wave of Covid-19 with a drastic increase from 20 to 355 cases (WHO, 2020). The spread of Covid-19 results in an 'infodemic' in which misleading information is disseminated rapidly especially on social media (Hou et al., 2020; Huynh et al., 2020) . Hence, this period is a potential source of fake news. Besides Covid-19, the items in our dataset cover a wide range of domains including entertainment, sport, finance and healthcare. The result of the data collection stage is 10,007 items that are prepared for the annotation process.",
"cite_spans": [
{
"start": 334,
"end": 352,
"text": "(Hou et al., 2020;",
"ref_id": "BIBREF2"
},
{
"start": 353,
"end": 372,
"text": "Huynh et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2.1"
},
{
"text": "We recruit 23 human annotators to participate in the annotation process. The annotators receive one week training to identify fact-related posts and how to evaluate the reliability of the post based on primary features including the news source, its image and content. Figure 1 demonstrates the annotation tool interface, which is designed to support quick and easy annotation. The first section contains guideline questions to remind the annotators of the labeling criterion including the news source credibility, the language appropriateness and fact accuracy. The second section is the post content, image and influence (i.e. number of likes, comments and shares). In Section 3, the annotators select a Reliability score for the post. There is a 5-point reliability Likert scale for fact-based posts with the following labels: 1 -Unreliable, 2 -Slightly unreliable, 3 -Neutral, 4 -Slightly reliable, 5 -Reliable. On the other hand, if the post is opinion-based and does not contain facts, the annotators should select label '0 -No category' instead.",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 277,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotator and Training",
"sec_num": "2.2.1"
},
{
"text": "The last section is a list of labeled items for the annotators to review and update their decision, if necessary, using the 'Undo' button.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Tool",
"sec_num": "2.2.2"
},
{
"text": "The annotation process is conducted from 9th to 19th October 2020. The annotators are divided into three groups to annotate 10,007 items independently. Therefore, each item will be annotated three times by different annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "2.2.3"
},
{
"text": "Once the annotators finish 30,021 annotations (i.e. 10,007 items annotated three times), we filter and summarise the result based on majority vote basis. Firstly, we combine labels of the same essence: Category 1 and 2 (Unreliable and Sightly unreliable) and Category 4 and 5 (Slightly reliable and Reliable). After merging the categories, we select the majority votes to be the final labels. If the majority vote is 1 or 2, the final label should be 1 -Unreliable. If the majority vote is 4 or 5, the final label should be 0 -Reliable. When the majority vote is 3 -Neutral, we finalise using ground truth labels. Lastly, if the majority agrees that the post is not fact-based (i.e. 0 -No Category), we remove it from the set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "2.2.3"
},
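The merging and majority-vote rules described above can be made concrete with a short sketch. This is a minimal illustration in Python, not the organisers' actual aggregation code; the function name and the `ground_truth` fallback argument are assumptions.

```python
from collections import Counter

def aggregate(scores, ground_truth=None):
    """Merge three annotators' 5-point scores into one binary label.

    scores: three values in {0,1,2,3,4,5}; ground_truth: optional fallback
    (1 = unreliable, 0 = reliable). Returns 1, 0, or None (item dropped).
    """
    # Merge labels of the same essence: 1-2 -> unreliable, 4-5 -> reliable.
    merged = []
    for s in scores:
        if s in (1, 2):
            merged.append("unreliable")
        elif s in (4, 5):
            merged.append("reliable")
        elif s == 3:
            merged.append("neutral")
        else:
            merged.append("no_category")   # 0 - No category (opinion-based)

    vote, count = Counter(merged).most_common(1)[0]
    if count < 2:                          # no majority: fall back / manual check
        return ground_truth
    if vote == "unreliable":
        return 1
    if vote == "reliable":
        return 0
    if vote == "neutral":                  # finalise with the ground-truth label
        return ground_truth
    return None                            # opinion-based: removed from the set

print(aggregate([1, 2, 4]))                # -> 1 (majority unreliable after merging)
```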
{
"text": "For items with no majority votes (i.e. three annotators have different opinions), we follow an alternate procedure. If the ground truth label is 1 -unreliable, the final label should be 1. On the other hand, if the ground truth label is 0 -reliable, we double check to separate reliable news from opinion-based items. The process is illustrated in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 356,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "2.2.3"
},
{
"text": "Once the annotation process is finished, data needs to go through the last step before being published for the competition -the content filtering. In this step, we manually check to ensure that data, includ-ing both text and image, published for the competition: Data splitting for data challenge is a difficult process in order to avoid evidence ambiguity and concept drifting which are the main cause of unstable ranking issue in data challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Filtering",
"sec_num": "2.2.4"
},
{
"text": "In this competition, we apply RDS (Nguyen et al., 2020) to split ReINTEL data into three sets including public train, validation, and private test sets. It is worth to mention that, RDS is a method to approximate optimum sampling for model diversification with ensemble rewarding to attain maximal machine learning potentials. It has a novel stochastic choice rewarding is developed as a viable mechanism for injecting model diversity in reinforcement learning.",
"cite_spans": [
{
"start": 34,
"end": 55,
"text": "(Nguyen et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Content Filtering",
"sec_num": "2.2.4"
},
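The exact RDS procedure is described in Nguyen et al. (2020). As a simpler point of reference, the sketch below builds the same kind of three-way split with plain stratified sampling from scikit-learn; it is explicitly not RDS, and the 80/10/10 ratios are illustrative assumptions.

```python
from sklearn.model_selection import train_test_split

def simple_three_way_split(texts, labels, seed=42):
    """Stratified train/validation/private-test split (80/10/10).

    A plain baseline splitter, not the reinforced RDS procedure: RDS would
    additionally reward splits on which diverse baseline learners behave
    consistently, to reduce evidence ambiguity between the sets.
    """
    x_train, x_rest, y_train, y_rest = train_test_split(
        texts, labels, test_size=0.2, stratify=labels, random_state=seed)
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```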
{
"text": "To apply RDS (Nguyen et al., 2020) for the data splitting process, it requires to have baseline learners to obtain rewards for the reinforced process. It is recommended to choose representative baseline learners, to let the reinforced learner better capture different learning behaviors. The use of these baseline learners is important since each learner will behave differently depending on the patterns contained in the target data. As a result, RDS helps to increase the diversity of the data samples in different sets. Here we employ three models to classify reliable news using textual features as follows: \u2022 Bi-LSTM (Schuster and Paliwal, 1997 ) is a bi-directional LSTM model. It has two LSTMs in which, one LSTM takes input sequence in a forward direction, and another LSTM takes input sequence in a backward direction. The use of Bi-LSTM architecture helps to increase the amount of information available to the network, to gain better performance in most of sequence related tasks. Bi-LSTM network is a standard baseline for most of text classification tasks.",
"cite_spans": [
{
"start": 13,
"end": 34,
"text": "(Nguyen et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 622,
"end": 649,
"text": "(Schuster and Paliwal, 1997",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.1.1"
},
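As a concrete illustration of this baseline, a minimal PyTorch Bi-LSTM text classifier could look like the sketch below; the vocabulary size, embedding dimension and mean-pooling choice are assumptions rather than the configuration actually used for the ReINTEL baselines.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Minimal bidirectional LSTM over token ids for binary (un)reliability."""

    def __init__(self, vocab_size=30000, emb_dim=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)    # forward + backward hidden states

    def forward(self, token_ids):              # token_ids: (batch, seq_len) LongTensor
        h, _ = self.lstm(self.emb(token_ids))  # (batch, seq_len, 2 * hidden)
        pooled = h.mean(dim=1)                 # mean-pool over time steps
        return torch.sigmoid(self.out(pooled)).squeeze(-1)  # unreliability score

# toy forward pass with two random sequences of length 20
model = BiLSTMClassifier()
scores = model(torch.randint(1, 30000, (2, 20)))
```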
{
"text": "\u2022 CNN-Text (Kim, 2014) is the use of CNN (LeCun et al., 1989 ) network on word embeddings to perform the classification tasks. The simple architecture outperformed all other models at the publication time.",
"cite_spans": [
{
"start": 11,
"end": 22,
"text": "(Kim, 2014)",
"ref_id": "BIBREF5"
},
{
"start": 37,
"end": 60,
"text": "CNN (LeCun et al., 1989",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.1.1"
},
{
"text": "\u2022 EasyEnsemble (Liu et al., 2009) is used to represent a tradition approach in dealing with im-balanced dataset. For the vectorization, we trained a Sent2Vec (Pagliardini et al., 2018) using the combined 1GB texts of Vietnamese Wikipedia data and 19 GB texts of Vuong (2018).",
"cite_spans": [
{
"start": 15,
"end": 33,
"text": "(Liu et al., 2009)",
"ref_id": "BIBREF7"
},
{
"start": 158,
"end": 184,
"text": "(Pagliardini et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.1.1"
},
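A rough sketch of this baseline with the EasyEnsembleClassifier from the imbalanced-learn package is shown below; TF-IDF features stand in for the Sent2Vec embeddings described above, and the toy texts and labels are placeholders.

```python
from imblearn.ensemble import EasyEnsembleClassifier   # pip install imbalanced-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# placeholder data; in the challenge these would be post texts and 0/1 labels
train_texts = ["tin chinh thong tu bao chi", "tin gia mao lan truyen",
               "thong tin chinh xac", "tin sai su that ve dich benh"]
train_labels = [0, 1, 0, 1]

# TF-IDF stands in for the Sent2Vec sentence embeddings used for the baseline
model = make_pipeline(
    TfidfVectorizer(),
    EasyEnsembleClassifier(n_estimators=5, random_state=0),
)
model.fit(train_texts, train_labels)
scores = model.predict_proba(["tin gia"])[:, 1]        # probability of 'unreliable'
```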
{
"text": "To disentangle dataset shift and evidence ambiguity of the data splitting strategy, we apply RDS stochastic choice reward mechanism (Nguyen et al., 2020) to create public training, public-and private testing sets. Figure 3 illustrates the learning dynamic towards the goal. (Nguyen et al., 2020) .",
"cite_spans": [
{
"start": 132,
"end": 153,
"text": "(Nguyen et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 274,
"end": 295,
"text": "(Nguyen et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 214,
"end": 222,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Dynamics",
"sec_num": "3.1.2"
},
{
"text": "Knowledge transfer has been found to be essential when it comes to downstream tasks with new datasets. If this transfer process is done correctly, it would greatly improve the performance of learning. Since ReINTEL challenge is a multimodal challenge, both visual based knowledge transfer and language based knowledge transfer are used by different teams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Learning",
"sec_num": "4"
},
{
"text": "To be fair between participants, we required all teams to register for the use of pre-trained models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Learning",
"sec_num": "4"
},
{
"text": "The pre-trained models registered by participants, with their modality and training data, are as follows. Language models: Word2VecVN (Vu, 2016), trained on 7GB texts of Vietnamese news; FastText (Vietnamese version) (Joulin et al., 2016), trained on the Vietnamese texts of the CommonCrawl corpus; ETNLP, trained on 1GB texts of Vietnamese Wikipedia; PhoBERT (Nguyen and Nguyen, 2020), trained on 20GB texts of Vietnamese news and Vietnamese Wikipedia; Bert4News (Nha, 2020), trained on more than 20GB texts of Vietnamese news; and vElectra and ViBERT (The et al., 2020), trained on 10GB and 60GB texts of Vietnamese news respectively. Vision models: VGG16 (Simonyan and Zisserman, 2015), YOLO (Redmon et al., 2015) and EfficientNet B7 (Tan and Le, 2019), all trained on ImageNet (Deng et al., 2009).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Language Vision Description",
"sec_num": null
},
{
"text": "For natural language processing tasks in Vietnamese, there have been many pre-trained language models are available. In 2016, Vu (2016) introduced the first monolingual pre-trained models for Vietnamese based on Word2Vec (Mikolov et al., 2013) . The use of pre-trained Word2VecVN models was proved to be useful in various tasks, such as the name entity recognition task (Vu et al., 2018) . In 2019, introduced the use of multiple pre-trained language models to achieve new state-of-the-art results in the name entity recognition task (Nguyen et al., 2019) . Up to date, there have been many other new monolingual language models for Vietnamese are available such as PhoBERT (Nguyen and Nguyen, 2020) , vElectra and ViBERT (The et al., 2020).",
"cite_spans": [
{
"start": 221,
"end": 243,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF10"
},
{
"start": 370,
"end": 387,
"text": "(Vu et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 534,
"end": 555,
"text": "(Nguyen et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 674,
"end": 699,
"text": "(Nguyen and Nguyen, 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Transfer Learning",
"sec_num": "4.1"
},
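For instance, a registered model such as PhoBERT can be loaded through the Hugging Face transformers library roughly as sketched below. The checkpoint name `vinai/phobert-base` and the mean-pooling step are illustrative assumptions, and PhoBERT normally expects word-segmented Vietnamese input.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# assumed checkpoint name; any registered Vietnamese model could be used instead
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
encoder = AutoModel.from_pretrained("vinai/phobert-base")

def embed(texts):
    """Mean-pooled contextual embeddings to feed a downstream classifier."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state    # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)       # (batch, seq_len, 1)
    return (hidden * mask).sum(1) / mask.sum(1)        # (batch, dim)
```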
{
"text": "Different from language models, visual models are normally universal and existing pre-trained models can be directly applied in most of image processing tasks. For the use of visual features, there is only one team using multimodal features among top 6 teams of the leader board. This team, in fact, achieved the 1 st rank on the public test (see Table 3 ); but they did not get the same rank on the private test. This hints that the reliability of news mainly depends on content of news and other meta information, such as number of likes on social networks. Moreover, it is yet to be explored to capture the reliability of news using both vision and language information.",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Vision Transfer Learning",
"sec_num": "4.2"
},
{
"text": "The use of both language and vision transfer learning is important for multimodal tasks. This line of research has attracted much attention with various new language-vision models, such as VilBERT (Lu et al., 2019) , 12-in-1 (Lu et al., 2020) . No participants employ into this approach in the ReINTEL challenge due to the lack of language and vision pre-trained models in Vietnamese. Moreover, it is required to have extensive computer resources for applying this approach in a data challenge. In the future, we expect to see more research done in this direction because both images and texts are essential to SNS issues. ",
"cite_spans": [
{
"start": 197,
"end": 214,
"text": "(Lu et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 225,
"end": 242,
"text": "(Lu et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language and Vision Transfer Learning",
"sec_num": "4.3"
},
{
"text": "Each instance includes 8 main attributes with/without a binary target label. Table 2 summarizes the key features of each attribute.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Data Format",
"sec_num": "5.1"
},
{
"text": "The challenge provides approximately 8,000 training examples with the respective target labels. The testing set consists of 2,000 examples without labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training/Testing Data",
"sec_num": "5.2"
},
{
"text": "Participants must submit the result in the same order as the testing set in the following format:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result Submission",
"sec_num": "5.3"
},
{
"text": "id1, label probability 1 Id2, label probability 2 ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result Submission",
"sec_num": "5.3"
},
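A minimal sketch of writing a submission file in this order is given below; the file name `submission.csv` and the six-decimal formatting are assumptions, not requirements stated by the organisers.

```python
import csv

def write_submission(ids, probabilities, path="submission.csv"):
    """Write one `id, label_probability` row per test item, preserving order."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for item_id, prob in zip(ids, probabilities):
            writer.writerow([item_id, f"{prob:.6f}"])

write_submission(["id1", "id2"], [0.91, 0.07])
```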
{
"text": "The challenge task is evaluated based on Area Under the Receiver Operating Characteristic Curve (AUC-ROC), which is a typical metric for classification tasks. Let us denote X as a continuous random variable that measures the 'classification' score of a given a news. As a binary classification task, this news could be classified as \"unreliable\" if X is greater than a threshold parameter T , and \"reliable\" otherwise. We denote f 1 (x), f 0 (x) as probability density functions that the news belongs to \"unreliable\" and \"reliable\" respectively, hence the true positive rate T P R(T ) and the false posi-tive rate F P R(T ) are computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "5.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T P R(T ) = \u221e T f 1 (x)dx (1) F P R(T ) = \u221e T f 0 (x)dx",
"eq_num": "(2)"
}
],
"section": "Evaluation Metric",
"sec_num": "5.4"
},
{
"text": "and the AUC-ROC score is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "5.4"
},
{
"text": "AU C ROC = \u221e \u2212\u221e T P R(T )F P R (T )dT (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "5.4"
},
{
"text": "Here, submissions are evaluated with ground-truth labels using the scikit-learn's implementation 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "5.4"
},
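Since submissions are scored with scikit-learn's roc_auc_score, participants can reproduce the official metric locally; the labels and scores below are only a toy example.

```python
from sklearn.metrics import roc_auc_score

# 1 = unreliable, 0 = reliable; scores are the submitted label probabilities
y_true = [1, 0, 1, 0, 1]
y_score = [0.92, 0.10, 0.65, 0.35, 0.80]

print(roc_auc_score(y_true, y_score))   # 1.0: every unreliable item outranks every reliable one
```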
{
"text": "During the course two months of the competition, 61 participants sign up for the challenge. 30% of the participants compete in groups of 2 (6 teams) and 4 members (2 teams). 19 participants sign our corpus usages agreement. From top 8 of the Private test leaderboard, 6 teams/participants submit their technical reports that demonstrate their strategies and findings from the challenge. The summary of the competition participation can be seen in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 447,
"end": 454,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Participation",
"sec_num": "5.5"
},
{
"text": "In total, 657 successful entries were recorded. The highest results of the Public test and Private test phase were 0.9427 and 0.9521 respectively. Key descriptive statistics of the results in each phase is illustrated in Table 5 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Outcomes",
"sec_num": "5.6"
},
{
"text": "The rise of misleading information on social media platforms has triggered the need for fact-checking and fake news detection. Therefore, the reliability of news has become a critical question in the modern age. In this paper, we introduce a novel dataset of nearly 10,000 SNSs entries with reliability labels. The dataset covers a great variety of topics ranging from healthcare to entertainment and economics. The annotation and validation process are presented in details with several filtering rounds. With both linguistic and visual features, we believe that the corpus is suitable for future research on fake news detection and news distributor behaviours using NLP and computer vision techniques. In Vietnam, where datasets on SNSs are scarce, our corpus will serve as a reliable material for other research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://wearesocial.com/digital-2020",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://scikit-learn.org/stable/ modules/generated/sklearn.metrics.roc_ auc_score.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://reml.ai",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank the InfoRE company for the data contribution, the ReML-AI research group 4 for the data contribution and financial support, and the twenty three annotators for their hard work to support the shared task. Without their support, the task would not have been possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Social media and fake news in the 2016 election",
"authors": [
{
"first": "Hunt",
"middle": [],
"last": "Allcott",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gentzkow",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of economic perspectives",
"volume": "31",
"issue": "2",
"pages": "211--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hunt Allcott and Matthew Gentzkow. 2017. Social me- dia and fake news in the 2016 election. Journal of economic perspectives, 31(2):211-36.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "ImageNet: A Large-Scale Hierarchical Image Database",
"authors": [
{
"first": "J",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "L.-J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2009,
"venue": "CVPR09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei- Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Assessment of public attention, risk perception, emotional and behavioural responses to the covid-19 outbreak: social media surveillance in china. Risk Perception, Emotional and Behavioural Responses to the COVID-19 Outbreak: Social Media",
"authors": [
{
"first": "Zhiyuan",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Fanxing",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Leesa",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyuan Hou, Fanxing Du, Hao Jiang, Xinyu Zhou, and Leesa Lin. 2020. Assessment of public at- tention, risk perception, emotional and behavioural responses to the covid-19 outbreak: social me- dia surveillance in china. Risk Perception, Emo- tional and Behavioural Responses to the COVID- 19 Outbreak: Social Media Surveillance in China (3/6/2020).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The covid-19 risk perception: A survey on socioeconomics and media attention",
"authors": [
{
"first": "Toan Luu",
"middle": [],
"last": "Huynh",
"suffix": ""
}
],
"year": 2020,
"venue": "Econ. Bull",
"volume": "40",
"issue": "1",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toan Luu Huynh et al. 2020. The covid-19 risk percep- tion: A survey on socioeconomics and media atten- tion. Econ. Bull, 40(1):758-764.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Fasttext.zip: Compressing text classification models",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
},
{
"first": "H\u00e9rve",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.03651"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H\u00e9rve J\u00e9gou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1181"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Backpropagation applied to handwritten zip code recognition",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Boser",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Denker",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Howard",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hubbard",
"suffix": ""
},
{
"first": "L",
"middle": [
"D"
],
"last": "",
"suffix": ""
}
],
"year": 1989,
"venue": "Neural Computation",
"volume": "1",
"issue": "4",
"pages": "541--551",
"other_ids": {
"DOI": [
"10.1162/neco.1989.1.4.541"
]
},
"num": null,
"urls": [],
"raw_text": "Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. 1989. Back- propagation applied to handwritten zip code recog- nition. Neural Computation, 1(4):541-551.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Exploratory undersampling for class-imbalance learning",
"authors": [
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)",
"volume": "39",
"issue": "",
"pages": "539--550",
"other_ids": {
"DOI": [
"10.1109/TSMCB.2008.2007853"
]
},
"num": null,
"urls": [],
"raw_text": "X. Liu, J. Wu, and Z. Zhou. 2009. Exploratory under- sampling for class-imbalance learning. IEEE Trans- actions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39(2):539-550.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. CoRR, abs/1908.02265.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "12-in-1: Multi-task vision and language representation learning",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Vedanuj",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2020. 12-in-1: Multi-task vision and language representation learning. In The IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "PhoBERT: Pre-trained language models for Vietnamese",
"authors": [
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1037--1042",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Viet- namese. In Findings of the Association for Computa- tional Linguistics: EMNLP 2020, pages 1037-1042.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Reinforced data sampling for model diversification",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Xuan-Son",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Quoc-Tuan",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Duc-Trong",
"middle": [],
"last": "Truong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoang D. Nguyen, Xuan-Son Vu, Quoc-Tuan Truong, and Duc-Trong Le. 2020. Reinforced data sampling for model diversification.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Vlsp shared task: Named entity recognition",
"authors": [
{
"first": "Huyen",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Quyen",
"middle": [],
"last": "Ngo",
"suffix": ""
},
{
"first": "Luong",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Vu",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Hien",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Computer Science and Cybernetics",
"volume": "34",
"issue": "4",
"pages": "283--294",
"other_ids": {
"DOI": [
"10.15625/1813-9663/34/4/13161"
]
},
"num": null,
"urls": [],
"raw_text": "Huyen Nguyen, Quyen Ngo, Luong Vu, Vu Tran, and Hien Nguyen. 2019. Vlsp shared task: Named en- tity recognition. Journal of Computer Science and Cybernetics, 34(4):283-294.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Pre-trained bert4news",
"authors": [
{
"first": "",
"middle": [],
"last": "Nguyen Van Nha",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nguyen Van Nha. 2020. Pre-trained bert4news.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features",
"authors": [
{
"first": "Matteo",
"middle": [],
"last": "Pagliardini",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jaggi",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL 2018 -Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised Learning of Sentence Embed- dings using Compositional n-Gram Features. In NAACL 2018 -Conference of the North American Chapter of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "You only look once: Unified, real-time object detection",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Redmon",
"suffix": ""
},
{
"first": "Santosh",
"middle": [],
"last": "Kumar Divvala",
"suffix": ""
},
{
"first": "Ross",
"middle": [
"B"
],
"last": "Girshick",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Redmon, Santosh Kumar Divvala, Ross B. Girshick, and Ali Farhadi. 2015. You only look once: Unified, real-time object detection. CoRR, abs/1506.02640.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Csi: A hybrid deep model for fake news detection",
"authors": [
{
"first": "Natali",
"middle": [],
"last": "Ruchansky",
"suffix": ""
},
{
"first": "Sungyong",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM '17",
"volume": "",
"issue": "",
"pages": "797--806",
"other_ids": {
"DOI": [
"10.1145/3132847.3132877"
]
},
"num": null,
"urls": [],
"raw_text": "Natali Ruchansky, Sungyong Seo, and Yan Liu. 2017. Csi: A hybrid deep model for fake news detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM '17, page 797-806, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "M",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "K",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {
"DOI": [
"10.1109/78.650093"
]
},
"num": null,
"urls": [],
"raw_text": "M. Schuster and K. K. Paliwal. 1997. Bidirectional re- current neural networks. IEEE Transactions on Sig- nal Processing, 45(11):2673-2681.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "defend: Explainable fake news detection",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Limeng",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dongwon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining",
"volume": "",
"issue": "",
"pages": "395--405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Limeng Cui, Suhang Wang, Dongwon Lee, and Huan Liu. 2019a. defend: Explainable fake news detection. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 395-405.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Fake news detection on social media: A data mining perspective",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Sliva",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiliang",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM SIGKDD explorations newsletter",
"volume": "19",
"issue": "1",
"pages": "22--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social me- dia: A data mining perspective. ACM SIGKDD ex- plorations newsletter, 19(1):22-36.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Beyond news contents: The role of social context for fake news detection",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "312--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Suhang Wang, and Huan Liu. 2019b. Beyond news contents: The role of social context for fake news detection. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 312-320.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Very deep convolutional networks for large-scale image recognition",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Vietnam passes cyber security law",
"authors": [
{
"first": "Tuan",
"middle": [],
"last": "Son",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuan Son. 2018. Vietnam passes cyber security law.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Efficientnet: Rethinking model scaling for convolutional neural networks",
"authors": [
{
"first": "Mingxing",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingxing Tan and Quoc V. Le. 2019. Efficientnet: Re- thinking model scaling for convolutional neural net- works. CoRR, abs/1905.11946.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving sequence tagging for vietnamese text using transformer-based neural models",
"authors": [
{
"first": "Oanh",
"middle": [],
"last": "Viet Bui The",
"suffix": ""
},
{
"first": "Phuong",
"middle": [],
"last": "Tran Thi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le-Hong",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viet Bui The, Oanh Tran Thi, and Phuong Le-Hong. 2020. Improving sequence tagging for vietnamese text using transformer-based neural models.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Vncorenlp: A vietnamese natural language processing toolkit",
"authors": [
{
"first": "Thanh",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dai Quoc Nguyen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 NAACL: Demonstrations",
"volume": "",
"issue": "",
"pages": "56--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanh Vu, Dat Quoc Nguyen, Dai Quoc Nguyen, Mark Dras, and Mark Johnson. 2018. Vncorenlp: A vietnamese natural language processing toolkit. In Proceedings of the 2018 NAACL: Demonstrations, pages 56-60, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Pre-trained word2vec models for vietnamese",
"authors": [
{
"first": "Xuan-Son",
"middle": [],
"last": "Vu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuan-Son Vu. 2016. Pre-trained word2vec models for vietnamese. https://github.com/sonvx/ word2vecVN.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Etnlp: A visual-aided systematic approach to select pre-trained embeddings for a downstream task",
"authors": [
{
"first": "Xuan-Son",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Thanh",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Son",
"middle": [
"N"
],
"last": "Tran",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuan-Son Vu, Thanh Vu, Son N. Tran, and Lili Jiang. 2019. Etnlp: A visual-aided systematic approach to select pre-trained embeddings for a downstream task. In: Proceedings of the International Confer- ence Recent Advances in Natural Language Process- ing (RANLP).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Who coronavirus disease (covid-19) dashboard",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WHO. 2020. Who coronavirus disease (covid-19) dashboard.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Unsupervised fake news detection on social media: A generative approach",
"authors": [
{
"first": "Shuo",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "5644--5651",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuo Yang, Kai Shu, Suhang Wang, Renjie Gu, Fan Wu, and Huan Liu. 2019. Unsupervised fake news detection on social media: A generative approach. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 5644-5651.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Fake news: Fundamental theories, detection strategies and challenges",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the twelfth ACM international conference on web search and data mining",
"volume": "",
"issue": "",
"pages": "836--837",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyi Zhou, Reza Zafarani, Kai Shu, and Huan Liu. 2019. Fake news: Fundamental theories, detection strategies and challenges. In Proceedings of the twelfth ACM international conference on web search and data mining, pages 836-837.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Figure 2: Data Annotation Process"
},
"TABREF1": {
"text": "1. Does not violate any law, statue, ordinance, or regulation",
"content": "<table><tr><td>2. Will not give rise to any claims of invasion of</td></tr><tr><td>privacy or publicity</td></tr><tr><td>3. Does not contain, depict, include or involve</td></tr><tr><td>any of the following:</td></tr><tr><td>\u2022 Political or religious views or other such</td></tr><tr><td>ideologies</td></tr><tr><td>\u2022 Explicit or graphic sexual activity</td></tr><tr><td>\u2022 Vulgar or offensive language and/or sym-</td></tr><tr><td>bols or content</td></tr><tr><td>\u2022 Personal information of individuals such</td></tr><tr><td>as names, telephone numbers, and ad-</td></tr><tr><td>dresses</td></tr><tr><td>\u2022 Other forms of ethical violations</td></tr><tr><td>3 The ReINTEL 2020 Challenge</td></tr><tr><td>3.1 Dataset Splitting</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF3": {
"text": "List of pre-trained models registered by all participants of ReINTEL challenge in 2020.",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"text": "",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF5": {
"text": "",
"content": "<table><tr><td>: Data attributes</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"text": "Top 6 teams on public-test and private-test with submitted papers and their final approaches. The rank is based on the ROC-AUC scores on the private-test.",
"content": "<table><tr><td># Team</td><td colspan=\"2\">ROC-AUC Public-test Private-test</td><td>Final Approach</td><td colspan=\"2\">Ensemble? Multimodal?</td></tr><tr><td>1 Kurtosis</td><td>0.9399</td><td>0.9521</td><td>TF-IDF + SVD; Emb + SVD; NB, Light-</td><td>Yes</td><td>No</td></tr><tr><td/><td/><td/><td>GBM, CatBoost</td><td/><td/></tr><tr><td>2 NLP BK</td><td>0.9360</td><td>0.9513</td><td>Bert4News + phoBERT + XLM + MetaFea-</td><td>Yes</td><td>No</td></tr><tr><td/><td/><td/><td>tures</td><td/><td/></tr><tr><td>3 SunBear</td><td>0.9418</td><td>0.9462</td><td>RoBerta + MLP</td><td>Yes</td><td>No</td></tr><tr><td>4 uit kt</td><td>-</td><td>0.9452</td><td>phoBERT + Bert4News</td><td>Yes</td><td>No</td></tr><tr><td>5 Toyo-Aime</td><td>0.9427</td><td>0.9449</td><td>CNN + Bert + Fully connected</td><td>Yes</td><td>Yes</td></tr><tr><td>6 ZaloTeam</td><td>-</td><td>0.9378</td><td>viBERT + viELECTRA + phoBERT</td><td>Yes</td><td>No</td></tr><tr><td>Metric</td><td/><td>Value</td><td/><td/><td/></tr><tr><td colspan=\"2\">Number of participants</td><td>61</td><td/><td/><td/></tr><tr><td>Number of teams</td><td/><td>8</td><td/><td/><td/></tr><tr><td colspan=\"3\">Number of signed agreements 19</td><td/><td/><td/></tr><tr><td colspan=\"2\">Number of submitted papers</td><td>6</td><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF7": {
"text": "Participation summary Public Test Private Test Overall",
"content": "<table><tr><td colspan=\"2\">Total Entries 571</td><td>86</td><td>657</td></tr><tr><td colspan=\"2\">Highest ROC 0.9427</td><td>0.9521</td><td>0.9474</td></tr><tr><td>Mean ROC</td><td>0.8463</td><td>0.8942</td><td>0.8703</td></tr><tr><td>Std. ROC</td><td>0.1215</td><td>0.1022</td><td>0.1119</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF8": {
"text": "Results summary",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}