{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:02.111410Z"
},
"title": "ReINTEL Challenge 2020: Exploiting Transfer Learning Models for Reliable Intelligence Identification on Vietnamese Social Network Sites",
"authors": [
{
"first": "Thi-Thanh",
"middle": [
"Kim"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Information Technology",
"location": {
"settlement": "Ho Chi Minh City",
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Kiet",
"middle": [
"Van"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Information Technology",
"location": {
"settlement": "Ho Chi Minh City",
"country": "Vietnam"
}
},
"email": "kietnv@uit.edu.vn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the system that we propose for the Reliable Intelligence Identification on Vietnamese Social Network Sites (ReINTEL) task of the Vietnamese Language and Speech Processing 2020 (VLSP 2020) Shared Task. In this task, the VLSP 2020 provides a dataset with approximately 6,000 training news/posts annotated with reliable or unreliable labels, and a test set consists of 2,000 examples without labels. In this paper, we conduct experiments on different transfer learning models, which are bert4news and PhoBERT fine-tuned to predict whether the news is reliable or not. In our experiments, we achieve the AUC score of 94.52% on the private test set from ReIN-TEL's organizers.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the system that we propose for the Reliable Intelligence Identification on Vietnamese Social Network Sites (ReINTEL) task of the Vietnamese Language and Speech Processing 2020 (VLSP 2020) Shared Task. In this task, the VLSP 2020 provides a dataset with approximately 6,000 training news/posts annotated with reliable or unreliable labels, and a test set consists of 2,000 examples without labels. In this paper, we conduct experiments on different transfer learning models, which are bert4news and PhoBERT fine-tuned to predict whether the news is reliable or not. In our experiments, we achieve the AUC score of 94.52% on the private test set from ReIN-TEL's organizers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the explosion of The Fourth Industrial Revolution in Vietnam, SNSs such as Facebook, Zalo, Lotus have attracted a huge number of users. SNSs have become an essential means for users to not only connect with friends but also freely share information and news. In the context of the COVID-19 pandemic, as well as prominent political and economic events that are of great interest to many people, some people tend to distribute unreliable information for personal purposes. The discovery of unreliable news has received considerable attention in recent times. Therefore, VLSP opens ReIN-TEL (Le et al., 2020) shared-task with the purpose of identifying being shared unreliable information on Vietnamese SNSs.",
"cite_spans": [
{
"start": 593,
"end": 610,
"text": "(Le et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Censoring news to see if it is trustworthy is tedious and frustrating. It is sometimes difficult to determine whether the news is credible or not. Fake news discovery has been studied more and more by academic researchers as well as social networking companies such as Facebook and Twitter. Many shared-task to detect rumors were held, such as SemEval-2017 Task 8: Determining rumour veracity and support for rumours (Derczynski et al., 2017) and SemEval-2019 Task 7: RumourEval, Determining Rumour Veracity and Support for Rumours (Gorrell et al., 2019) .",
"cite_spans": [
{
"start": 417,
"end": 442,
"text": "(Derczynski et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 532,
"end": 554,
"text": "(Gorrell et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this task, we focus on finding a solution to categorize unreliable news collected in Vietnamese, which is a low-resource language for natural language preprocessing. Specifically, we implement deep learning and transfer learning methods to classify SNSs news/posts. The problem is stated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Input: Given a Vietnamese news/post on SNSs with the text of news/post (always available), some relative information, and image (may be missing).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Output: One of two labels (unreliable or reliable) that are predicted by our system. Figure 1 shows an example of this task. The rest of the paper is organized as follows. In Section 2, we present the related work. In Section 3, we explain some proposed approaches and its result. In Section 4, we present the experimental analysis. Finally, Section 5 draws conclusions and future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
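A minimal sketch of the task interface described by the Input/Output specification above; the field names, label encoding, and `classify` stub are illustrative assumptions, not the official ReINTEL dataset schema.

```python
# Hypothetical interface sketch for the ReINTEL task: field names and the
# label encoding are assumptions for illustration, not the dataset schema.
from dataclasses import dataclass
from typing import Optional

RELIABLE, UNRELIABLE = 0, 1

@dataclass
class NewsPost:
    text: str                         # post content, always available
    metadata: Optional[dict] = None   # e.g., likes, comments, shares, user id
    image: Optional[bytes] = None     # may be missing

def classify(post: NewsPost) -> int:
    """Return RELIABLE or UNRELIABLE for the given post (model stub)."""
    raise NotImplementedError
```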
{
"text": "Ruchansky et al. 2017used a hybrid model called CSI to categorize real and fake news. The CSI model includes three components: capture, source, and integrate. The first module is used to detect a user's pattern of activity on news feeds. The second module learns the source characteristics of user behavior. The last module combines both previous modules to categorize news is real or fake. The CSI model does not make assumptions about user behavior or posts, although it uses both user-profiles and article data for classification. Slovikovskaya 2019focused on improving the results of the Fake News Challenge Stage 1 (FNC-1) stance detection task using transfer learning. Specifically, this work improved the FNC-1 best performing model adding BERT (Devlin et al., 2018) sentence embedding of input sequences as a model feature and fine-tuned XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019b) transformers on FNC-1 extended dataset.",
"cite_spans": [
{
"start": 752,
"end": 773,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 852,
"end": 871,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 884,
"end": 903,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In this study, we concentrate on SOTA models, including deep neural network models and transfer learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approaches",
"sec_num": "3"
},
{
"text": "In studying the fundamental theories and methods of detecting fake news, Zhou and Zafarani (2020) have come up with some fundamental theories of detecting fake news. The authors wrote, \"theories have implied that fake news potentially differs from the truth in terms of, e.g., writing style and quality (by Undeutsch hypothesis)\". Therefore, we choose text-feature as the primary input of our experimental models. Firstly, we run deep learning models like Text CNN (Kim, 2014) , BiLSTM (Zhou et al., 2016) combine with some pre-trained word embed- ding models such as FastText 1 (Bojanowski et al., 2016) and PhoW2V 2 (Tuan to predict the credibility of news. The results of this approach get an AUC score of 0.84 to 0.86, as shown in Table 1 . We also plan to experiment with incorporating other features that ReINTEL's organizers provide, such as user id, the number of likes, shares, comments, and image, but the lack of information (shown in Table 2 ) leads to enormous dynamic causes us to ignore this approach.",
"cite_spans": [
{
"start": 73,
"end": 97,
"text": "Zhou and Zafarani (2020)",
"ref_id": "BIBREF19"
},
{
"start": 465,
"end": 476,
"text": "(Kim, 2014)",
"ref_id": "BIBREF5"
},
{
"start": 486,
"end": 505,
"text": "(Zhou et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 579,
"end": 604,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 735,
"end": 742,
"text": "Table 1",
"ref_id": null
},
{
"start": 946,
"end": 953,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Deep neural network models",
"sec_num": "3.1.1"
},
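As an illustration of this first approach, here is a minimal PyTorch sketch (not the authors' exact code) of a BiLSTM classifier over pre-trained word embeddings such as FastText or PhoW2V; it assumes an `embedding_matrix` of shape (vocab_size, 300) has already been built from the dataset vocabulary.

```python
# Minimal BiLSTM-over-pretrained-embeddings sketch; `embedding_matrix` is an
# assumed precomputed array aligning the dataset vocabulary with FastText or
# PhoW2V vectors.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, embedding_matrix, hidden_size=128, num_labels=2):
        super().__init__()
        vocab_size, embed_dim = embedding_matrix.shape
        self.embedding = nn.Embedding.from_pretrained(
            torch.as_tensor(embedding_matrix, dtype=torch.float32),
            freeze=False)
        self.lstm = nn.LSTM(embed_dim, hidden_size,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_size, num_labels)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)  # (batch, seq_len, embed_dim)
        outputs, _ = self.lstm(embedded)      # (batch, seq_len, 2 * hidden)
        pooled, _ = outputs.max(dim=1)        # max pooling over the sequence
        return self.classifier(pooled)        # logits, (batch, num_labels)
```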
{
"text": "One of the problems of deep learning is its massive data requirements as well as the need for computing resources. This has spurred the development of large models and transfer learning methods. Nguyen et al. (2020) presents two BERT finetuning methods for the sentiment analysis task on datasets of Vietnamese reviews and gets slightly outperforms other models using GloVe and Fast-Text. Liu et al. (2019a) fine-tuned BERT under the multi-task learning framework and obtains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.7% (an improvement of 2.2%) 3 . Therefore, we attempt to fine-tune PhoBERT 4 (Nguyen and Tuan Nguyen, 2020) and bert4news 5 , pre-trained models for Vietnamese which is based on BERT architecture. And transfer learning shows strength in these experi-Model AUC PhoBERT 0.932424 bert4news 0.935163 PhoBERT+bert4news 0.945169 Table 3 : Results of PhoBERT, bert4news, and results that combine these two models on the private test set. ments, we get an AUC score of between 0.92 to almost 0.95, as shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 938,
"end": 945,
"text": "Table 3",
"ref_id": null
},
{
"start": 1117,
"end": 1124,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "BERT and RoBERTa for Vietnamese",
"sec_num": "3.1.2"
},
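The paper does not state how the two models' predictions are combined; averaging their predicted probabilities is one simple rule consistent with the reported ensemble, sketched below with hypothetical file names.

```python
# Combining the two fine-tuned models' test-set probabilities by averaging.
# Assumption: each .npy file (hypothetical names) holds the predicted
# probability of the "unreliable" class for every test example.
import numpy as np
from sklearn.metrics import roc_auc_score

phobert_probs = np.load("phobert_test_probs.npy")
bert4news_probs = np.load("bert4news_test_probs.npy")
combined_probs = (phobert_probs + bert4news_probs) / 2.0

# On a labeled validation split, the challenge metric (AUC) would be:
# auc = roc_auc_score(val_labels, combined_probs)
```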
{
"text": "After many experiments, we find that the deep learning models do not achieve higher performance than fine-tuned bert4news and PhoBERT. Therefore, we decided to focus on only improving the results on transfer learning methods. Besides, we also try to combine the results of these two models. The fine-tuning idea is taken from the study (Sun et al., 2019) . The BERT base model creates an architecture of 12 sub-layers in the encoder, 12 heads in multi-head attention on each sub-layer. BERT input is a sequence of not more than 512 tokens; the output is a set of self-attention vectors equal to the input length. Each vector is 768 in size. The BERT input string represents both single text and text pairs explicitly, where a special token [CLS] is used for string sorting tasks, and a special token [SEP] marks the end position of the single text or the position that separates the text pair. For finetuning the BERT architecture for text classification, we concatenated the last four hidden representations of the [CLS] token, which will be passed into a small MLP network containing the full connection layers to transform into the distribution of discrete label values.",
"cite_spans": [
{
"start": 336,
"end": 354,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 740,
"end": 745,
"text": "[CLS]",
"ref_id": null
},
{
"start": 1016,
"end": 1021,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning BERT and RoBERTa for Vietnamese",
"sec_num": "3.2"
},
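A minimal sketch of the classification head described above, using the Hugging Face transformers library: the [CLS] vectors from the last four hidden layers are concatenated and passed through a small MLP. "vinai/phobert-base" is the public PhoBERT checkpoint; the MLP sizes and dropout rate are illustrative assumptions, not the authors' exact configuration.

```python
# Fine-tuning head sketch: concatenate the [CLS] representation from the last
# four encoder layers, then map to label logits with a small MLP.
import torch
import torch.nn as nn
from transformers import AutoModel

class ClsConcatClassifier(nn.Module):
    def __init__(self, model_name="vinai/phobert-base", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name,
                                                 output_hidden_states=True)
        hidden = self.encoder.config.hidden_size  # 768 for base models
        self.mlp = nn.Sequential(                 # sizes are assumptions
            nn.Linear(4 * hidden, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, num_labels))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states holds the embedding layer plus all encoder layers;
        # take the position-0 ([CLS]) vector from the last four layers.
        cls_vectors = [h[:, 0, :] for h in out.hidden_states[-4:]]
        features = torch.cat(cls_vectors, dim=-1)  # (batch, 4 * hidden)
        return self.mlp(features)                  # logits over the labels
```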
{
"text": "Our fine-tuning process consists of two main steps: tokenize the text content and retrain the model on the dataset. For PhoBERT, we use VN-coreNLP (Vu et al., 2018) library to tokenize content, while for bert4news, we use BertTokenizer.",
"cite_spans": [
{
"start": 147,
"end": 164,
"text": "(Vu et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning BERT and RoBERTa for Vietnamese",
"sec_num": "3.2"
},
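A minimal sketch of the two tokenization paths. The paper names the VnCoreNLP library for PhoBERT's word segmentation; the `py_vncorenlp` wrapper and the "NlpHUST/vibert4news-base-cased" checkpoint name for bert4news are assumptions for illustration.

```python
# Two tokenization paths before fine-tuning; wrapper and checkpoint names
# other than "vinai/phobert-base" are assumptions, as noted above.
from transformers import AutoTokenizer
import py_vncorenlp

text = "..."  # raw post content from the dataset

# bert4news path: a standard (Bert)Tokenizer over the raw text.
bert4news_tok = AutoTokenizer.from_pretrained("NlpHUST/vibert4news-base-cased")
bert4news_enc = bert4news_tok(text, truncation=True, max_length=256)

# PhoBERT path: word-segment with VnCoreNLP first, then tokenize.
py_vncorenlp.download_model(save_dir="./vncorenlp")
segmenter = py_vncorenlp.VnCoreNLP(annotators=["wseg"], save_dir="./vncorenlp")
segmented = " ".join(segmenter.word_segment(text))
phobert_tok = AutoTokenizer.from_pretrained("vinai/phobert-base")
phobert_enc = phobert_tok(segmented, truncation=True, max_length=256)
```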
{
"text": "In this paper, we conduct various experiments on Google Colab (CPU: Intel(R) Xeon(R) CPU @ 2.20GHz; RAM: 12.75 GB; GPU Tesla P100 or T4 16GB with CUDA 10.1). We fine-tune PhoBERT and bert4news with different parameters as batch size, learning rate, epoch, random seed. To save time and cost, we set batch size 32 for all models. With the same hyperparameter values, distinct random seeds can lead to substantially different results (Dodge et al., 2020) . With the above configuration, we spend about 2.40 minutes per epoch for both bert4news and PhoBERT. Table 4 shows the parameter setting and the performance, respectively. Figure 2 shows the results of our testing process. It is easy to see that our results are not stable in the first phase due to trying many methods. Our results are more stable in the later stage of the competition, but there are not many mutations.",
"cite_spans": [
{
"start": 432,
"end": 452,
"text": "(Dodge et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 555,
"end": 562,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 626,
"end": 634,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "4.1"
},
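Because distinct random seeds can change the outcome substantially (Dodge et al., 2020), each run should fix its seed explicitly; a minimal sketch follows (the actual seed values behind Table 4 are not restated here).

```python
# Fix all relevant random number generators before each fine-tuning run.
import random
import numpy as np
import torch

def set_seed(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op when CUDA is unavailable

set_seed(42)  # illustrative value; the authors varied the seed across runs
```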
{
"text": "In summary, we have proposed the following methods for classifying untrustworthy news: combining deep learning model with pre-trained word embedding, fine-tune bert4news, and PhoBERT, combining text, numeric, and visual features. Accordingly, the best result belongs to the transfer learning models when achieving an AUC score of 94.52% for the combined model of bert4news and PhoBERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "5"
},
{
"text": "In the future, we plan to combine other features offered by ReINTEL's organizers with transfer learning models due to classifying based on news content alone is not enough (Shu et al., 2019) . While we are doing well in transfer learning, we also aim to build a system for the fast and accu-rate detection of fake news at the early stages of propagation, which is much more complicated than detecting long-circulated news. Besides, we hope to develop a system to score users based on the news they post and share to reduce unreliable news on Vietnam SNSs.",
"cite_spans": [
{
"start": 172,
"end": 190,
"text": "(Shu et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "5"
},
{
"text": "https://fasttext.cc/docs/en/crawl-vectors.html 2 https://github.com/datquocnguyen/PhoW2V 3 As of February 25, 2019 on the latest GLUE test set 4 https://github.com/VinAIResearch/PhoBERT 5 https://github.com/bino282/bert4news",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. CoRR, abs/1607.04606.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SemEval-2017 task 8: RumourEval: Determining rumour veracity and support for rumours",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Procter",
"suffix": ""
},
{
"first": "Geraldine",
"middle": [],
"last": "Wong Sak Hoi",
"suffix": ""
},
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "69--76",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2006"
]
},
"num": null,
"urls": [],
"raw_text": "Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. SemEval-2017 task 8: RumourEval: Determining rumour veracity and support for ru- mours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 69-76, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Ilharco",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stop- ping.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours",
"authors": [
{
"first": "Genevieve",
"middle": [],
"last": "Gorrell",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Kochkina",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Ahmet",
"middle": [],
"last": "Aker",
"suffix": ""
},
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "845--854",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2147"
]
},
"num": null,
"urls": [],
"raw_text": "Genevieve Gorrell, Elena Kochkina, Maria Liakata, Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, and Leon Derczynski. 2019. SemEval-2019 task 7: RumourEval, determining rumour veracity and sup- port for rumours. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 845-854, Minneapolis, Minnesota, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. CoRR, abs/1408.5882.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Reintel: A multimodal data challenge for responsible information identification on social network sites",
"authors": [
{
"first": "Duc-Trong",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Xuan-Son",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Nhu-Dung",
"middle": [],
"last": "To",
"suffix": ""
},
{
"first": "Huu-Quang",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thuy-Trinh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Linh",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Anh-Tuan",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Minh-Duc",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Nghia",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Huyen",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Hoang",
"middle": [
"D"
],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duc-Trong Le, Xuan-Son Vu, Nhu-Dung To, Huu- Quang Nguyen, Thuy-Trinh Nguyen, Linh Le, Anh- Tuan Nguyen, Minh-Duc Hoang, Nghia Le, Huyen Nguyen, and Hoang D. Nguyen. 2020. Reintel: A multimodal data challenge for responsible informa- tion identification on social network sites.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multi-task deep neural networks for natural language understanding",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019a. Multi-task deep neural net- works for natural language understanding. CoRR, abs/1901.11504.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "PhoBERT: Pre-trained language models for Vietnamese",
"authors": [],
"year": null,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1037--1042",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.92"
]
},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Viet- namese. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 1037- 1042, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Ngoc Hoang Luong, and Quoc Hung Ngo. 2020. Fine-tuning bert for sentiment analysis of vietnamese reviews",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Thai Nguyen, Thoai Linh Nguyen, Ngoc Hoang Luong, and Quoc Hung Ngo. 2020. Fine-tuning bert for sentiment analysis of vietnamese reviews.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "CSI: A hybrid deep model for fake news",
"authors": [
{
"first": "Natali",
"middle": [],
"last": "Ruchansky",
"suffix": ""
},
{
"first": "Sungyong",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natali Ruchansky, Sungyong Seo, and Yan Liu. 2017. CSI: A hybrid deep model for fake news. CoRR, abs/1703.06959.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Beyond news contents: The role of social context for fake news detection",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19",
"volume": "",
"issue": "",
"pages": "312--320",
"other_ids": {
"DOI": [
"10.1145/3289600.3290994"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Suhang Wang, and Huan Liu. 2019. Beyond news contents: The role of social context for fake news detection. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19, page 312-320, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Transfer learning from transformers to fake news challenge stance detection (FNC-1) task. CoRR",
"authors": [
{
"first": "Valeriya",
"middle": [],
"last": "Slovikovskaya",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valeriya Slovikovskaya. 2019. Transfer learning from transformers to fake news challenge stance detection (FNC-1) task. CoRR, abs/1910.14353.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "How to fine-tune BERT for text classification?",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune BERT for text classifica- tion? CoRR, abs/1905.05583.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A pilot study of text-to-SQL semantic parsing for Vietnamese",
"authors": [
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mai",
"middle": [
"Hoang"
],
"last": "Dao",
"suffix": ""
},
{
"first": "Dat Quoc",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4079--4085",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.364"
]
},
"num": null,
"urls": [],
"raw_text": "Anh Tuan Nguyen, Mai Hoang Dao, and Dat Quoc Nguyen. 2020. A pilot study of text-to-SQL se- mantic parsing for Vietnamese. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4079-4085, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "VnCoreNLP: A Vietnamese natural language processing toolkit",
"authors": [
{
"first": "Thanh",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Dat Quoc",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dai Quoc",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations",
"volume": "",
"issue": "",
"pages": "56--60",
"other_ids": {
"DOI": [
"10.18653/v1/N18-5012"
]
},
"num": null,
"urls": [],
"raw_text": "Thanh Vu, Dat Quoc Nguyen, Dai Quoc Nguyen, Mark Dras, and Mark Johnson. 2018. VnCoreNLP: A Vietnamese natural language processing toolkit. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Demonstrations, pages 56-60, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Text classification improved by integrating bidirectional LSTM with two-dimensional max pooling",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Suncong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Jiaming",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Hongyun",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhou, Zhenyu Qi, Suncong Zheng, Jiaming Xu, Hongyun Bao, and Bo Xu. 2016. Text classification improved by integrating bidirectional LSTM with two-dimensional max pooling. CoRR, abs/1611.06639.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A survey of fake news: Fundamental theories, detection methods, and opportunities",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Comput. Surv",
"volume": "",
"issue": "5",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3395046"
]
},
"num": null,
"urls": [],
"raw_text": "Xinyi Zhou and Reza Zafarani. 2020. A survey of fake news: Fundamental theories, detection methods, and opportunities. ACM Comput. Surv., 53(5).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Timestamp post: 1584426000. Number of post's like: 45. Number of post's comment: 15. Number of post's share: 8. Label: 1 (unreliable). Image: NAN."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "An example extracted from the dataset."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Performances of the team during the challenging task."
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>Id: 0.</td></tr><tr><td>User id: 2167074723833130000.</td></tr><tr><td>Post message:</td></tr></table>",
"text": "C\u1ea7n c\u00e1c b\u1eadc ph\u1ee5 huynh x\u00e3 Ng\u0169 Th\u00e1i l\u00ean ti\u1ebfng, kh\u00f4ng ng\u1edd x\u00e3 m\u00ecnh c\u0169ng nh\u1eadn th\u1ecbt nhi\u1ec5m s\u00e1n... Cho c\u00e1c ch\u00e1u M\u1ea7m non \u0103n u\u1ed1ng th\u1ebf n\u00e0y th\u1eadt v\u00f4 nh\u00e2n t\u00ednh! VTV \u0111\u0103ng tin r\u1ed3i nh\u00e9 c\u00e1c anh ch\u1ecb. English translation: Needing the parents of Ngu Thai commune to speak up, astonishing my commune accept contaminated meat ... Feeding preschool children like this is so inhumane! VTV posted the news, guys.",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"text": "Statistics of missing values in the dataset.",
"num": null,
"html": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"text": "Parameter changes lead to a change of results on the public test set.",
"num": null,
"html": null
}
}
}
}