{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:01.640519Z"
},
"title": "Vietnamese Relation Extraction with BERT-based Models at VLSP 2020",
"authors": [
{
"first": "Thuat",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hanoi University of Science and Technology",
"location": {
"settlement": "Hanoi",
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Man",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hanoi University of Science and Technology",
"location": {
"settlement": "Hanoi",
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Duc",
"middle": [],
"last": "Trong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hanoi University of Science and Technology",
"location": {
"settlement": "Hanoi",
"country": "Vietnam"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In recent years, BERT-based models have achieved state-of-the-art performance on many Natural Language Processing tasks. As a result, BERT-based models have become a trend and are widely used for many NLP tasks. In this paper, we present our approach to applying BERT-based models to the Relation Extraction shared task of the VLSP 2020 campaign. In detail, we present: (1) our general idea for solving this task; (2) how we preprocess the data to fit this idea and yield better results; (3) how we use BERT-based models for the Relation Extraction task; and (4) our experiments and results on the public development data and private test data of VLSP 2020.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In recent years, BERT-based models have achieved state-of-the-art performance on many Natural Language Processing tasks. As a result, BERT-based models have become a trend and are widely used for many NLP tasks. In this paper, we present our approach to applying BERT-based models to the Relation Extraction shared task of the VLSP 2020 campaign. In detail, we present: (1) our general idea for solving this task; (2) how we preprocess the data to fit this idea and yield better results; (3) how we use BERT-based models for the Relation Extraction task; and (4) our experiments and results on the public development data and private test data of VLSP 2020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Nowadays, Natural Language Processing (NLP) is an interesting and necessary field of research, and its results can bring many benefits to humans. Within NLP, the results of Information Extraction (IE) in general and Relation Extraction (RE) in particular can greatly help automate text-processing tasks. However, compared to other popular languages (e.g., English, Chinese), evaluations and research results for Relation Extraction in Vietnamese are still limited. In this year's international workshop on Vietnamese Language and Speech Processing (VLSP 2020) 1 , for the first time, there is a shared task on Relation Extraction in Vietnamese. This is encouraging, as it means that Relation Extraction in Vietnamese is gaining more attention from the research and industry communities. For the Relation Extraction shared task in the VLSP 2020 campaign, the organizers release training, development, and test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 https://vlsp.org.vn/vlsp2020/eval The training and development data contain Vietnamese electronic newspaper articles, the labeled entity types of all entity mentions in the articles (there are only three entity types), and the labeled relations between entity mentions belonging to the same sentence. The test data contains the same kind of information as the training and development data (articles and entity mentions), but the relation labels between entities are not provided. Participating teams are asked to build learning systems, based on the training and development data, that can predict the relation labels between entities belonging to the same sentence in the test data. In the next sections of this paper, we describe in detail the VLSP 2020 RE task's dataset, how we preprocess the data, and the architecture of the BERT-based models that we use for this year's VLSP RE task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All three sets of data (training, development, and test) contain files in the WebAnno TSV 3.2 file format 2 . Each file contains one raw document (an electronic newspaper article) that has not been split into sentences. There are three types of Named Entities (NE): Locations (LOC), Organizations (ORG), and Persons (PER), and four types of relations between annotated entities; three of the four relation types are directed and the last one is undirected. These relation types are described in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 487,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "Detailed information is given on the VLSP 2020 RE task's page 3 and in the annotation guideline of this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "In this section, we describe our general idea about how we process data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Idea",
"sec_num": "2.2"
},
{
"text": "\u2022 We need to split the original raw documents into sentences, since the dataset contains only pre-labeled relationships between entities belonging to the same sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Idea",
"sec_num": "2.2"
},
{
"text": "\u2022 Assuming that there are a total of n entities in a sentence, we create n(n\u22121)/2 sentences corresponding to the n(n\u22121)/2 pairs of entities. Each of these sentences is a data point that is later passed to our BERT-based model. The label for each data point is the relation label between the pair of entities in that sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Idea",
"sec_num": "2.2"
},
{
"text": "\u2022 There are four types of relations. Three of them are directed, so for each directed relation we create two new undirected relation labels, depending on whether the directed relation label is on the preceding or the following entity in the sentence. See EXAMPLE 1 and EXAMPLE 2 below for more clarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Idea",
"sec_num": "2.2"
},
{
"text": "EXAMPLE 1: In the sentence \"H\u00e0 N\u1ed9i l\u00e0 th\u1ee7 \u0111\u00f4 c\u1ee7a Vi\u1ec7t Nam\", the relation between the two entities (\"H\u00e0 N\u1ed9i\" and \"Vi\u1ec7t Nam\") is PART-WHOLE. This relation label is on the \"Vi\u1ec7t Nam\" entity, which is the entity that comes second in the sentence. We set this data point's label to PART-WHOLE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Idea",
"sec_num": "2.2"
},
{
"text": "EXAMPLE 2: In the sentence \"Vi\u1ec7t Nam c\u00f3 th\u1ee7 \u0111\u00f4 l\u00e0 H\u00e0 N\u1ed9i\", the relation between the two entities (\"H\u00e0 N\u1ed9i\" and \"Vi\u1ec7t Nam\") is PART-WHOLE. This relation label is on the \"H\u00e0 N\u1ed9i\" entity, which is the entity that comes first in the sentence. We set this data point's label to WHOLE-PART.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Idea",
"sec_num": "2.2"
},
{
"text": "\u2022 Many pairs of entities in the same sentence have no relation between them, so we create a new relation type called \"OTHERS\" for these pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Idea",
"sec_num": "2.2"
},
{
"text": "\u2022 Finally, we pass these data points into our BERT-based model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Idea",
"sec_num": "2.2"
},
{
"text": "In the end, we have a total of seven relation types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Idea",
"sec_num": "2.2"
},
{
"text": "This section presents details on how we preprocess the data. Because the dataset contains only pre-labeled relationships between entities belonging to the same sentence, we need to split the original raw documents into sentences. To do that, we try two of the best libraries for Vietnamese language processing: VnCoreNLP 4 (VNC) and Underthesea 5 (UTS). In our own experiments, Underthesea seems better to us than VnCoreNLP:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "\u2022 VNC has problems with Unicode normalization: \"Thanh Th\u1ee7y\" becomes \"Thanh Thu\u1ef7\", while UTS seems to handle Unicode normalization better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "\u2022 VNC sometimes splits a correct sentence into two sentences, while UTS very rarely has this problem. This problem is quite hard for us to fix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "\u2022 VNC splits sentences perfectly on characters like a single dot or three dots, while UTS sometimes does not split sentences on these characters. However, we can find and fix these sentences easily.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "Besides these, there are some other small problems when we use these two libraries. But the results from Underthesea seem better than those from VnCoreNLP, so we decided to use Underthesea for preprocessing the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "We follow these steps to preprocess the data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "\u2022 Normalize the data to the Unicode \"NFC\" form. \u2022 Use Underthesea to split the raw documents into sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "<s> w_1 w_2 w_3 ... w_{n-1} w_n ... w_m ... w_{k-2} w_{k-1} w_k </s>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "\u2022 Find and review sentences that contain characters like a dot or three dots where these characters are not the ending characters of the sentence. If there are mistakes in these sentences (two different sentences combined into a single sentence), we split them using rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "\u2022 Split sentences by colon punctuation using rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "\u2022 Remove characters that are not alphanumeric (letters or digits) at the beginning or at the end of an entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "\u2022 Fix problems caused by faulty word segmentation in Underthesea.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "Besides these, we also perform some other preprocessing steps, such as checking and fixing cases where a relation is annotated between entities belonging to different sentences, to make sure that the data extracted from the raw data is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing data",
"sec_num": "2.3"
},
{
"text": "In this section, we present our BERT-based model's architecture. We use two BERT-based models that support the Vietnamese language: PhoBERT (PB) (Nguyen and Tuan Nguyen, 2020) and XLM-RoBERTa (XLMR) (Conneau et al., 2019) . We use these two BERT-based models to generate embedding vectors for each pair of entities in each sentence. We then combine these embeddings (using pooling methods) into a single embedding vector and pass it into a multi-layer neural network whose last layer has seven units (the number of relation types) and a Softmax activation function. The architecture of our model is shown in Figure 1 .",
"cite_spans": [
{
"start": 194,
"end": 216,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 603,
"end": 611,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "BERT-based model",
"sec_num": "2.4"
},
{
"text": "In detail, we follow these steps to process sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-based model",
"sec_num": "2.4"
},
{
"text": "\u2022 We pass the sentences into the BERT-based models to generate embedding vectors for each pair of entities in each sentence. We try using both BERT-based models (PB and XLMR) together; we also try using only PB or only XLMR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-based model",
"sec_num": "2.4"
},
{
"text": "\u2022 In particular, each entity may consist of multiple word pieces. For each of an entity's word pieces, we combine its embeddings from different BERT layers into a single embedding vector for that word piece. We tried several combinations, such as concatenating the embeddings from the last four layers, or element-wise max pooling of the embeddings from the last two layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-based model",
"sec_num": "2.4"
},
{
"text": "\u2022 Then, for each entity, we apply the same kind of combination to generate a single embedding vector for the entity from its word-piece embedding vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-based model",
"sec_num": "2.4"
},
{
"text": "\u2022 Each data-point sentence has two entities, so we have two embedding vectors. Let the first entity's embedding vector be h_1 and the second entity's embedding vector be h_2. From these two vectors, we generate a single embedding vector for the current sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-based model",
"sec_num": "2.4"
},
{
"text": "[h_1, h_2].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-based model",
"sec_num": "2.4"
},
{
"text": "\u2022 Each of the PB and XLMR models has its own final sentence embedding. In the combined PB and XLMR model, we concatenate the two sentence embeddings of these models to obtain a single sentence embedding vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-based model",
"sec_num": "2.4"
},
{
"text": "\u2022 Finally, we pass the final sentence embedding vector to a multi-layer neural network whose last layer has seven units (the number of relation types) and a Softmax activation function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-based model",
"sec_num": "2.4"
},
{
"text": "In our experiments, we try using only one of the two BERT-based models (PB or XLMR) and compare this with using both models; using both models always gives much better results. We use a Google Colab 6 GPU for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "3"
},
{
"text": "Since the maximum GPU memory on Colab is 16GB, our biggest model is a combination of a fine-tuned PB base model with a non-fine-tuned XLMR large model (Model 1). We found that fine-tuning PB for a fairly high number of epochs (about 8) with a small learning rate of E-05 gives results that are close to the best we have obtained. The model's results also seem more stable when using average pooling instead of max pooling. The results are presented in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 414,
"end": 421,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "3"
},
{
"text": "All three of our best models use the PB base and XLMR base models, with PB base fine-tuned with a learning rate of E-05. Our worst model on the development data (Model 3) gives the best result on the private data. We think that the two other models may have overfitted to the training data and to the public development data used for tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "3"
},
{
"text": "With the results in Table 2 , we achieved the best result with Model 3, ranking 1st on the scoreboard for the private test set of the Relation Extraction shared task at the VLSP 2020 campaign.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "3"
},
{
"text": "In this paper, we have presented our approach to the Relation Extraction task proposed at the VLSP Shared Task 2020. We find that BERT-based models are indeed very effective, since our models are quite simple but achieve good results. In the future, we want to use better GPUs to train bigger models, such as a fine-tuned PB large combined with a fine-tuned XLMR large, since bigger models seem to give better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4"
},
{
"text": "https://webanno.github.io/webanno/releases/3.6.6/docs/user-guide.html#sect_webannotsv 3 https://vlsp.org.vn/vlsp2020/eval/re",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/vncorenlp/VnCoreNLP 5 https://github.com/undertheseanlp/underthesea",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "PhoBERT: Pre-trained language models for Vietnamese",
"authors": [
{
"first": "Dat Quoc",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Anh Tuan",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Viet- namese. In Findings of the Association for Com- putational Linguistics: EMNLP 2020.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Our BERT-based model for Relation Extraction.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">No. Relation</td><td>Arguments</td><td>Directionality</td></tr><tr><td>1</td><td>LOCATED</td><td>PER-LOC, ORG-LOC</td><td>Directed</td></tr><tr><td>2</td><td>PART-WHOLE</td><td>LOC-LOC, ORG-ORG, ORG-LOC</td><td>Directed</td></tr><tr><td>3</td><td>PERSONAL-SOCIAL</td><td>PER-PER</td><td>Undirected</td></tr><tr><td>4</td><td>ORGANIZATION-</td><td/><td/></tr></table>",
"text": "AFFILIATION PER-ORG, PER-LOC, ORG-ORG, LOC-ORG Directed"
},
"TABREF1": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Relation types in the VLSP 2020 dataset."
},
"TABREF3": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Each participating team can submit three final</td></tr><tr><td>results on the test set. The official evalua-</td></tr><tr><td>tion measures are micro-averaged F-score. So</td></tr><tr><td>we choose three models that have the highest</td></tr><tr><td>micro-averaged F-score on the public devel-</td></tr><tr><td>opment data. Details of the results (on both</td></tr><tr><td>public development data and private test data)</td></tr><tr><td>are presented in</td></tr></table>",
"text": "The performance of the models (Microaveraged F-score) on the public development data and the private test data."
}
}
}
}