{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:01.293736Z"
},
"title": "An Empirical Study of Using Pre-trained BERT Models for Vietnamese Relation Extraction Task at VLSP 2020",
"authors": [
{
"first": "Pham",
"middle": [
"Quang",
"Nhat"
],
"last": "Minh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aimesoft JSC",
"location": {
"settlement": "Hanoi",
"country": "Vietnam"
}
},
"email": "minhpham@aimesoft.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present an empirical study of using pre-trained BERT models for the relation extraction task at the VLSP 2020 Evaluation Campaign. We applied two state-of-theart BERT-based models: R-BERT and BERT model with entity starts. For each model, we compared two pre-trained BERT models: FPTAI/vibert and NlpHUST/vibert4news. We found that NlpHUST/vibert4news model significantly outperforms FPTAI/vibert for the Vietnamese relation extraction task. Finally, we proposed an ensemble model that combines R-BERT and BERT with entity starts. Our proposed ensemble model slightly improved against two single models on the development data and the test data provided by the task organizers.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present an empirical study of using pre-trained BERT models for the relation extraction task at the VLSP 2020 Evaluation Campaign. We applied two state-of-theart BERT-based models: R-BERT and BERT model with entity starts. For each model, we compared two pre-trained BERT models: FPTAI/vibert and NlpHUST/vibert4news. We found that NlpHUST/vibert4news model significantly outperforms FPTAI/vibert for the Vietnamese relation extraction task. Finally, we proposed an ensemble model that combines R-BERT and BERT with entity starts. Our proposed ensemble model slightly improved against two single models on the development data and the test data provided by the task organizers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The relation extraction task is to extract entity mention pairs from a sentence and determine relation types between them. Relation extraction systems can be applied in question answering (Xu et al., 2016) , detecting contradiction (Pham et al., 2013) , and extracting gene-disease relationships (Chun et al., 2006) , protein-protein interaction (Huang et al., 2004) from biomedical texts.",
"cite_spans": [
{
"start": 188,
"end": 205,
"text": "(Xu et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 232,
"end": 251,
"text": "(Pham et al., 2013)",
"ref_id": "BIBREF7"
},
{
"start": 296,
"end": 315,
"text": "(Chun et al., 2006)",
"ref_id": "BIBREF2"
},
{
"start": 346,
"end": 366,
"text": "(Huang et al., 2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In VLSP 2020, the relation extraction task is organized to assess and advance relation extraction work for the Vietnamese language. In this paper, we present an empirical study of BERT-based models for the relation extraction task in VLSP 2020. We applied two state-of-the-art BERT-based models for relation extraction: R-BERT (Wu and He, 2019) and BERT with entity starts (Soares et al., 2019) . Two models use entity markers to capture location information of entity mentions. For each model, we investigated the effect of choosing pre-train BERT models in the task, by comparing two Vietnamese pre-trained BERT models: NlpHUST/vibert4news and FPTAI/vibert (Bui et al., 2020) . In our understanding, our paper is the first work that provides the comparison of pre-trained BERT models for Vietnamese relation extraction.",
"cite_spans": [
{
"start": 327,
"end": 344,
"text": "(Wu and He, 2019)",
"ref_id": "BIBREF11"
},
{
"start": 373,
"end": 394,
"text": "(Soares et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 622,
"end": 677,
"text": "NlpHUST/vibert4news and FPTAI/vibert (Bui et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is structured as follows. In Section 2, we present two existing BERTbased models for relation classification, which we investigated in our work. In Section 3, we describe how we prepared datasets for the two BERT-based models and our proposed ensemble model. In Section 4, we give detailed settings and experimental results. Section 5 gives discussions and findings. Finally, in Section 6, we present conclusions and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following sections, we briefly describe BERT model (Devlin et al., 2019) , problem formalization, and two existing BERT-based models for relation classification, which we investigated in this paper.",
"cite_spans": [
{
"start": 58,
"end": 79,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-based Models for Relation Classification",
"sec_num": "2"
},
{
"text": "The pre-trained BERT model (Devlin et al., 2019) is a masked language model that is built from multiple layers of bidirectional Transformer encoders (Vaswani et al., 2017) . We can fine-tune pre-trained BERT models to obtain the state-ofthe-art results on many NLP tasks such as text classification, named-entity recognition, question answering, natural language inference. Currently, pre-trained BERT models are available for many languages. For Vietnamese, in our understanding, there are three available pre-trained BERT models: PhoBERT (Nguyen and Nguyen, 2020), FPTAI/vibert (Bui et al., 2020), and Nl-pHUST/vibert4news 1 . Those models are differ-1 vibert4news is available on https: //huggingface.co/NlpHUST/ vibert4news-base-cased ent in pre-training data, selected tokenization, and training settings. In this paper, we investigated two pre-trained BERT models including FPTAI/vibert and NlpHUST/vibert4news for the relation extraction task. Investigation of PhoBERT for the task is left for future work.",
"cite_spans": [
{
"start": 27,
"end": 48,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 149,
"end": 171,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained BERT Models",
"sec_num": "2.1"
},
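The following minimal sketch (not from the paper) shows how the two pre-trained checkpoints compared in this section could be loaded with the Hugging Face transformers library. The hub id for vibert4news comes from the footnote above; the id used here for FPTAI/vibert is an assumption and may differ from the actual repository name.

```python
# Hedged sketch: load the two Vietnamese pre-trained BERT checkpoints with `transformers`.
from transformers import AutoModel, AutoTokenizer

MODEL_IDS = {
    "vibert4news": "NlpHUST/vibert4news-base-cased",   # from the paper's footnote
    "fptai_vibert": "FPTAI/vibert-base-cased",          # assumed hub id for FPTAI/vibert
}

def load_encoder(name: str):
    """Return (tokenizer, encoder) for one of the pre-trained BERT variants."""
    model_id = MODEL_IDS[name]
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    encoder = AutoModel.from_pretrained(model_id)
    return tokenizer, encoder
```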
{
"text": "In this paper, we focus on the relation classification task in the supervised setting. Training data is a sequence of examples. Each sample is a tuple r = (x, s 1 , s 2 , y). We define x = [x 0 ...x n ] as a sequence of tokens, where x 0 = [CLS] is a special start marker. Let s 1 = (i, j) and s 2 = (k, l) are pairs of integers such that 0 < i \u2264 j \u2264 n, 0 < k \u2264 l \u2264 n. Indexes of s 1 and s 2 are start and end indexes of two entity mentions in x, respectively. y denotes the relation label of the two entity mentions in the sequence x. We use a special label OTHER for entity mentions which have no relation between them. Our task is to train a classification model from the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "2.2"
},
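A minimal sketch of the relation sample defined above, as a plain data structure. The field names are illustrative, not taken from the authors' code.

```python
# One relation sample r = (x, s_1, s_2, y): token sequence, two inclusive
# entity spans, and a relation label ("OTHER" when no relation holds).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RelationSample:
    tokens: List[str]      # x = [x_0, ..., x_n], with tokens[0] == "[CLS]"
    s1: Tuple[int, int]    # (i, j), 0 < i <= j <= n, span of the first entity
    s2: Tuple[int, int]    # (k, l), 0 < k <= l <= n, span of the second entity
    label: str             # relation type, or "OTHER"
```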
{
"text": "In R-BERT (Wu and He, 2019) , for a sequence x and two target entities e 1 and e 2 which specified by indexes of s 1 and s 2 , to make the BERT module capture the location information of the two entities, a special token '$' is added at both the beginning and end of the first entity, and a special token '#' is added at both the beginning and end of the second entity.",
"cite_spans": [
{
"start": 10,
"end": 27,
"text": "(Wu and He, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "R-BERT",
"sec_num": "2.3"
},
{
"text": "[CLS] token is also added to the beginning of the sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R-BERT",
"sec_num": "2.3"
},
{
"text": "For example, after inserting special tokens, a sequence with two target entities \"Phi S\u01a1n\" and \"SLNA\" becomes to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R-BERT",
"sec_num": "2.3"
},
{
"text": "\"[CLS] C\u1ea7u th\u1ee7 $ Phi S\u01a1n $ \u0111\u00e3 ghi b\u00e0n cho # SLNA # v\u00e0o ph\u00fat th\u1ee9 80 c\u1ee7a tr\u1eadn \u0111\u1ea5u .\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R-BERT",
"sec_num": "2.3"
},
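A sketch (not the authors' exact preprocessing code) of how the entity markers could be inserted given the two spans; it reproduces the marked-up example above.

```python
# Insert '$' around the first entity, '#' around the second, and prepend [CLS].
# Spans are inclusive (start, end) token indices into `tokens`.
from typing import List, Tuple

def add_entity_markers(tokens: List[str],
                       s1: Tuple[int, int],
                       s2: Tuple[int, int]) -> List[str]:
    inserts = [
        (s1[0], "$"), (s1[1] + 1, "$"),   # before and after entity 1
        (s2[0], "#"), (s2[1] + 1, "#"),   # before and after entity 2
    ]
    out = list(tokens)
    # Insert from the rightmost position first so earlier indices stay valid.
    for pos, marker in sorted(inserts, key=lambda p: p[0], reverse=True):
        out.insert(pos, marker)
    return ["[CLS]"] + out

# Example from the paper (syllable tokens, entities "Phi Sơn" and "SLNA"):
# add_entity_markers(["Cầu", "thủ", "Phi", "Sơn", "đã", "ghi", "bàn", "cho",
#                     "SLNA", "vào", "phút", "thứ", "80", "của", "trận", "đấu", "."],
#                    (2, 3), (8, 8))
```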
{
"text": "The sequence x with entity markers, is put to a BERT model to get hidden states of tokens in the sequence. Then, we calculate averages of hidden states of tokens within the two target entities and put them through a tanh activation function and a fully connected layer to make vector representations of the two entities. Let H 0 , H 1 , H 2 be hidden states at [CLS] and vector representations of e 1 and e2. We concatenate three hidden states and add a softmax layer for relation classification. R-BERT obtained 89.25% of MACRO F1 on the SemEval-2010 Task 8 dataset (Hendrickx et al., 2010) .",
"cite_spans": [
{
"start": 361,
"end": 366,
"text": "[CLS]",
"ref_id": null
},
{
"start": 567,
"end": 591,
"text": "(Hendrickx et al., 2010)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "R-BERT",
"sec_num": "2.3"
},
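A hedged PyTorch sketch of the classification head described above: span-averaged entity vectors and the [CLS] state are each passed through tanh and a linear layer, concatenated, and classified. Layer names and sizes follow the description in the text, not a verified reimplementation of R-BERT.

```python
import torch
import torch.nn as nn

class RBERTHead(nn.Module):
    """Sketch of an R-BERT-style head over BERT hidden states."""
    def __init__(self, hidden_size: int, num_labels: int, dropout: float = 0.1):
        super().__init__()
        self.dense_cls = nn.Linear(hidden_size, hidden_size)
        self.dense_ent = nn.Linear(hidden_size, hidden_size)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden_size * 3, num_labels)

    @staticmethod
    def span_mean(hidden: torch.Tensor, span_mask: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden); span_mask: (batch, seq_len), 1.0 inside the entity span
        lengths = span_mask.sum(dim=1, keepdim=True).clamp(min=1)
        return (hidden * span_mask.unsqueeze(-1)).sum(dim=1) / lengths

    def forward(self, hidden, e1_mask, e2_mask):
        h0 = self.dense_cls(torch.tanh(hidden[:, 0]))                     # H_0 from [CLS]
        h1 = self.dense_ent(torch.tanh(self.span_mean(hidden, e1_mask)))  # H_1 for entity 1
        h2 = self.dense_ent(torch.tanh(self.span_mean(hidden, e2_mask)))  # H_2 for entity 2
        # Softmax / cross-entropy is applied by the loss function on these logits.
        return self.classifier(self.dropout(torch.cat([h0, h1, h2], dim=-1)))
```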
{
"text": "We applied the BERT model with entity starts (hereinafter, referred to as BERT-ES) presented in (Soares et al., 2019) for Vietnamese relation classification. In the model, similar to R-BERT, special tokens are added at the beginning and end of two target entities. In experiments of BERT-ES for Vietnamese relation classification, different from (Soares et al., 2019), we used entity markers '$' and '#' instead of markers '[E1]', '[/E1]', '[E1]', and '[/E2]'. We did not add [SEP] at the end of a sequence. In BERT-ES, hidden states at the start positions of two target entities are concatenated and put through a softmax layer for final classification. On SemEval-2010 Task 8 dataset, BERT-ES obtained 89.2% of MACRO F1.",
"cite_spans": [
{
"start": 96,
"end": 117,
"text": "(Soares et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 476,
"end": 481,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT with Entity Start",
"sec_num": "2.4"
},
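For contrast with the R-BERT head, the following hedged sketch shows an entity-start head in the same style: it only reads the hidden states at the two start-marker positions and classifies their concatenation.

```python
import torch
import torch.nn as nn

class EntityStartHead(nn.Module):
    """Sketch of a BERT-ES-style head: classify the concatenated entity-start states."""
    def __init__(self, hidden_size: int, num_labels: int, dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden_size * 2, num_labels)

    def forward(self, hidden, e1_start_idx, e2_start_idx):
        # hidden: (batch, seq_len, hidden); *_start_idx: (batch,) positions of the start markers
        batch_idx = torch.arange(hidden.size(0), device=hidden.device)
        h_e1 = hidden[batch_idx, e1_start_idx]
        h_e2 = hidden[batch_idx, e2_start_idx]
        return self.classifier(self.dropout(torch.cat([h_e1, h_e2], dim=-1)))
```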
{
"text": "In this work, we applied R-BERT and BERT-ES as we presented in Section 2 for Vietnamese relation extraction, and proposed an ensemble model of R-BERT and BERT-ES. In the following sections, we present how we prepared data for training BERTbased models and how we combined two single models: R-BERT and BERT-ES.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Methods",
"sec_num": "3"
},
{
"text": "Relation extraction data provided by VLSP 2020 organizers in WebAnno TSV 3.2 format (Eckart de Castilho et al., 2016). In the data, sentences are not segmented and tokens are tokenized by white spaces. Punctuations are still attached in tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "3.1"
},
{
"text": "According to the task guideline, we consider only intra-sentential relations, so sentence segmentation is required in data preprocessing. We used VnCoreNLP toolkit (Vu et al., 2018) for both sentence segmentation and tokenization. For the sake of simplicity, we just used syllables as tokens of sentences. VnCoreNLP sometimes made mistakes in sentence segmentation, and as the result, we missed some relations for those cases.",
"cite_spans": [
{
"start": 164,
"end": 181,
"text": "(Vu et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "3.1"
},
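A minimal sketch of this preprocessing step, assuming the vncorenlp Python wrapper around the VnCoreNLP toolkit; the exact API, jar path, and post-processing in the authors' setup may differ. Since the paper keeps syllables as tokens, the word-segmentation underscores are split back out.

```python
# Hedged sketch: sentence segmentation with VnCoreNLP, keeping syllable-level tokens.
from vncorenlp import VnCoreNLP

# The jar path and heap size are assumptions; adjust to your installation.
annotator = VnCoreNLP("VnCoreNLP-1.1.1.jar", annotators="wseg", max_heap_size="-Xmx2g")

def split_sentences(document_text: str):
    # tokenize() returns one token list per detected sentence.
    sentences = annotator.tokenize(document_text)
    # Undo underscore-joined word segmentation to recover syllable tokens.
    return [[syl for tok in sent for syl in tok.split("_")] for sent in sentences]
```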
{
"text": "From each sentence, for training and evaluation, we made relation samples which are tupes r = (x, s 1 , s 2 , y) as described in Section 2. Since in the data, named entities with their labels are provided, a simple way of making relation samples is generating all possible entity mention pairs from entity mentions of a sentence. We used the label OTHER for entity mention pairs that lack relation between them. All entity mentions pairs that are not included in gold-standard data are used as OTHER samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Sample Generation",
"sec_num": "3.2"
},
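A sketch of this sample generation step; the names and data shapes are illustrative, not the authors' code.

```python
# Enumerate ordered entity-mention pairs in a sentence; pairs absent from the
# gold annotations are labeled OTHER.
from itertools import permutations

def make_relation_samples(tokens, mentions, gold_relations):
    """mentions: list of (start, end, entity_type) spans in the sentence;
    gold_relations: dict mapping (mention_idx_1, mention_idx_2) -> relation label."""
    samples = []
    for i, j in permutations(range(len(mentions)), 2):
        label = gold_relations.get((i, j), "OTHER")
        s1, s2 = mentions[i][:2], mentions[j][:2]
        samples.append((tokens, s1, s2, label))
    return samples
```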
{
"text": "In the annotation guideline provided by VLSP 2020 organizers, there are constraints about types of two target entities of relation types as shown in Table 1 . Thus, we consider only entity mention pairs whose types satisfy those constraints. In training data, sometimes types of two target entities do not follow the annotation guideline. We accepted those entity pairs in making relation samples from provided train and development datasets. However, in processing test data for making submitted results, we consider only entity pairs whose types follow the annotation guideline.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 156,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Relation Sample Generation",
"sec_num": "3.2"
},
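An illustrative filter for the type constraints just described. The permitted (type, type) argument pairs per relation come from Table 1 of the VLSP 2020 guideline; the entries below are placeholders, not the full table.

```python
# Placeholder mapping: relation label -> set of permitted (arg1_type, arg2_type) pairs.
ALLOWED_ARGUMENT_TYPES = {
    "PERSONAL-SOCIAL": {("PERSON", "PERSON")},
    # ... remaining relations and type pairs as specified in Table 1
}

def pair_is_admissible(arg1_type: str, arg2_type: str) -> bool:
    """A candidate pair is kept if at least one relation permits its type combination."""
    return any((arg1_type, arg2_type) in pairs
               for pairs in ALLOWED_ARGUMENT_TYPES.values())
```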
{
"text": "Since the relation PERSONAL-SOCIAL is undirected, for this type, if we consider both pairs (e 1 , e 2 ) and (e 2 , e 1 ) in which e 1 and e 2 are PERSON entities, it may introduce redundancy. Thus, we added an extra constraint for PER-PER pairs that e 1 must come before e 2 in a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Sample Generation",
"sec_num": "3.2"
},
{
"text": "In the training data, we found a very long sentence with more than 200 relations. We omitted that sentence from the training data because that sentence may lead to too many OTHER relation samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Sample Generation",
"sec_num": "3.2"
},
{
"text": "In our work, we tried to combine R-BERT and BERT-ES to make an ensemble model. We did that by calculating weighted averages of probabilities returned by R-BERT and BERT-ES. Since in our experiments, BERT-ES performed slightly better than R-BERT on the development set, we used weights 0.4 and 0.6 for R-BERT and BERT-ES, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Ensemble Model",
"sec_num": "3.3"
},
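The ensemble step above reduces to a few lines; a sketch with the weights stated in the paper (0.4 for R-BERT, 0.6 for BERT-ES) follows. Variable names are illustrative.

```python
import numpy as np

def ensemble_probs(p_rbert: np.ndarray, p_bert_es: np.ndarray,
                   w_rbert: float = 0.4, w_bert_es: float = 0.6) -> np.ndarray:
    """Weighted average of class probabilities; both inputs have shape (num_samples, num_labels)."""
    return w_rbert * p_rbert + w_bert_es * p_bert_es

# predicted_labels = ensemble_probs(p_rbert, p_bert_es).argmax(axis=-1)
```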
{
"text": "We conducted experiments to compare three BERT-based models on Vietnamese relation extraction data: R-BERT, BERT-ES, and the proposed ensemble model. We also investigated the effects of two Vietnamese pre-trained BERT models on the performance of models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "The provided training dataset contains 506 documents, and the development dataset contains 250 documents. After data preprocessing and relation sample generation, we obtained relations with label distributions shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 226,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "In development, we trained models on the training data and evaluated models on the development data. However, to generate results on the provided test dataset, we trained BERT-based models on the dataset obtained by combining the provided training dataset and the development dataset. Table 3 shows hyper-parameters we used for training models. We trained all models on a single 2080 Ti GPU.",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 292,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
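The training settings of Table 3 can be captured in a small configuration block; the field names below are illustrative and not tied to a specific training script.

```python
# Hyper-parameters from Table 3.
TRAINING_CONFIG = {
    "max_seq_length": 384,
    "num_train_epochs": 10,
    "train_batch_size": 16,
    "learning_rate": 2e-5,
}
```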
{
"text": "We used MICRO F1 and MACRO F1 of four relation labels which do not include the label OTHER as evaluation measures. Table 4 shows the evaluation results obtained on the development dataset. We can see that using NlpHUST/vibert4news significantly outperformed FPTAI/vibert in both MICRO F1 and MACRO F1 scores. BERT-ES performed slightly better than R-BERT. The proposed ensemble model is slightly improved against R-BERT and BERT-ES in terms of MICRO F1 score. Table 5 shows the evaluation results obtained on the test dataset. We used NlpHUST/vibert4news for generating test results. Table 5 confirmed the effectiveness of our proposed ensemble model. The ensemble model obtained the best MACRO F1 and the best MICRO F1 score on the test data among the three models.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 460,
"end": 467,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 584,
"end": 591,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
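A hedged sketch of the evaluation described above, computing micro and macro F1 over the four relation labels while excluding OTHER. The relation label names other than OTHER and PERSONAL-SOCIAL are assumed placeholders for the VLSP 2020 label set.

```python
from sklearn.metrics import f1_score

# Assumed label names; only pairs restricted to these four labels are scored.
RELATION_LABELS = ["LOCATED", "PART-WHOLE", "PERSONAL-SOCIAL", "AFFILIATION"]

def evaluate(y_true, y_pred):
    micro = f1_score(y_true, y_pred, labels=RELATION_LABELS, average="micro")
    macro = f1_score(y_true, y_pred, labels=RELATION_LABELS, average="macro")
    return {"micro_f1": micro, "macro_f1": macro}
```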
{
"text": "We looked at details of precision, recall, and F1 scores for each relation type on the development data. Table 6 shows results of the ensemble model with vibert4news pre-trained model. PERSONAL-SOCIAL turned out to be a difficult label. The proposed ensemble obtained a low Recall, and F1 score for that label. The reason might be that the relations of PERSONAL-SOCIAL are few in the training data while the patterns of PERSONAL-SOCIAL relations are wider than other relation types.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 112,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Result Analysis",
"sec_num": "4.4"
},
{
"text": "In experiments, we compared the effects of two pretrained BERT models: NlpHUST/vibert4news and FPTAI/vibert on relation extraction. The two pre-trained models have the same BERT architecture (BERT base model) but are different in chosen tokenizers, vocabulary size, pre-training data, and training procedure. Table 7 shows a comparison of the two models.",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "FPTAI/vibert was trained on 10GB of texts collected from online newspapers while Nl-pHUST/vibert4news was trained on 20GB of texts in the news domain. FPTAI/vibert used subword tokenization, and vocabulay of FPTAI/vibert was modified from mBERT while tokenization of vib-ert4news is based on syllables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We come up with some reasons why using Nl-pHUST/vibert4news significantly outperformed FPTAI/vibert for Vietnamese relation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "\u2022 Pre-training data used to trained vibert4news is much larger than FPTAI/vibert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "\u2022 Tokenization used in NlpHUST/vibert4news is based on syllables while FPTAI/vibert used subwords and modified the original vocabulary of mBERT. We hypothesize that syllables which are basic units in Vietnamese are more appropriate than subwords for Vietnamese NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Due to the time limit, we did not investigate PhoBERT (Nguyen and Nguyen, 2020) which used word-level corpus to train the model. As future work, we plan to compare vibert4news that uses syllable-based tokenization with PhoBERT that uses word-level/subword tokenization for Vietnamese relation extraction. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We have presented an empirical study of BERTbased models for relation extraction task at VLSP 2020 Evaluation Campaign. Experimental results show that the BERT-ES model which uses entity markers and entity starts obtained better results than the R-BERT model, and choosing an appropriate pre-trained BERT model is important for the task. We showed that pre-trained model Nl-pHUST/vibert4news outperformed FPTAI/vibert for Vietnamese relation extraction task. In future work, we plan to investigate PhoBERT (Nguyen and Nguyen, 2020) for Vietnamese relation extraction to understand the effect of using word segmentation to the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Improving sequence tagging for vietnamese text using transformer-based neural models",
"authors": [
{
"first": "Viet",
"middle": [],
"last": "The",
"suffix": ""
},
{
"first": "Thi",
"middle": [
"Oanh"
],
"last": "Bui",
"suffix": ""
},
{
"first": "Phuong",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le-Hong",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.15994"
]
},
"num": null,
"urls": [],
"raw_text": "The Viet Bui, Thi Oanh Tran, and Phuong Le-Hong. 2020. Improving sequence tagging for vietnamese text using transformer-based neural models. arXiv preprint arXiv:2006.15994.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A web-based tool for the integrated annotation of semantic and syntactic structures",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Eckart De Castilho",
"suffix": ""
},
{
"first": "\u00c9va",
"middle": [],
"last": "M\u00fajdricza-Maydt",
"suffix": ""
},
{
"first": "Silvana",
"middle": [],
"last": "Seid Muhie Yimam",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities (LT4DH)",
"volume": "",
"issue": "",
"pages": "76--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Eckart de Castilho, \u00c9va M\u00fajdricza-Maydt, Seid Muhie Yimam, Silvana Hartmann, Iryna Gurevych, Anette Frank, and Chris Biemann. 2016. A web-based tool for the integrated annotation of se- mantic and syntactic structures. In Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities (LT4DH), pages 76-84, Osaka, Japan. The COLING 2016 Organiz- ing Committee.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Extraction of gene-disease relations from medline using domain dictionaries and machine learning",
"authors": [
{
"first": "Hong-Woo",
"middle": [],
"last": "Chun",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Rie",
"middle": [],
"last": "Shiba",
"suffix": ""
},
{
"first": "Naoki",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Teruyoshi",
"middle": [],
"last": "Hishiki",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2006,
"venue": "Biocomputing",
"volume": "",
"issue": "",
"pages": "4--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong-Woo Chun, Yoshimasa Tsuruoka, Jin-Dong Kim, Rie Shiba, Naoki Nagata, Teruyoshi Hishiki, and Jun'ichi Tsujii. 2006. Extraction of gene-disease re- lations from medline using domain dictionaries and machine learning. In Biocomputing 2006, pages 4- 15. World Scientific.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals",
"authors": [
{
"first": "Iris",
"middle": [],
"last": "Hendrickx",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "\u00d3",
"middle": [],
"last": "Diarmuid",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "33--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid \u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 task 8: Multi-way classification of semantic relations be- tween pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evalua- tion, pages 33-38, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discovering patterns to extract protein-protein interactions from full texts",
"authors": [
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Kunbin",
"middle": [],
"last": "Donald G Payan",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2004,
"venue": "Bioinformatics",
"volume": "20",
"issue": "18",
"pages": "3604--3612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minlie Huang, Xiaoyan Zhu, Yu Hao, Donald G Payan, Kunbin Qu, and Ming Li. 2004. Discovering pat- terns to extract protein-protein interactions from full texts. Bioinformatics, 20(18):3604-3612.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "PhoBERT: Pre-trained language models for Vietnamese",
"authors": [
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1037--1042",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Viet- namese. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 1037- 1042.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using shallow semantic parsing and relation extraction for finding contradiction in text",
"authors": [
{
"first": "Minh Quang Nhat",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Minh",
"middle": [
"Le"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Akira",
"middle": [],
"last": "Shimazu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1017--1021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh Quang Nhat Pham, Minh Le Nguyen, and Akira Shimazu. 2013. Using shallow semantic parsing and relation extraction for finding contradiction in text. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1017-1021.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Matching the blanks: Distributional similarity for relation learning",
"authors": [
{
"first": "",
"middle": [],
"last": "Livio Baldini",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Soares",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2895--2905",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learn- ing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895-2905.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information process- ing systems, 30:5998-6008.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "VnCoreNLP: A Vietnamese natural language processing toolkit",
"authors": [
{
"first": "Thanh",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dai Quoc Nguyen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations",
"volume": "",
"issue": "",
"pages": "56--60",
"other_ids": {
"DOI": [
"10.18653/v1/N18-5012"
]
},
"num": null,
"urls": [],
"raw_text": "Thanh Vu, Dat Quoc Nguyen, Dai Quoc Nguyen, Mark Dras, and Mark Johnson. 2018. VnCoreNLP: A Vietnamese natural language processing toolkit. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Demonstrations, pages 56-60, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Enriching pretrained language model with entity information for relation classification",
"authors": [
{
"first": "Shanchan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "2361--2364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shanchan Wu and Yifan He. 2019. Enriching pre- trained language model with entity information for relation classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2361-2364. ACM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Question answering on freebase via relation extraction and textual evidence",
"authors": [
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Songfang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2326--2336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016. Question answering on freebase via relation extraction and textual evidence. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2326-2336.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"text": "Relation types permitted arguments and directionality.",
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"text": "Label distribution of relation samples generated from train and dev data.",
"content": "<table><tr><td colspan=\"2\">Hyper-Parameters Value</td></tr><tr><td>Max sequence length</td><td>384</td></tr><tr><td>Training epochs</td><td>10</td></tr><tr><td>Train batch size</td><td>16</td></tr><tr><td>Learning rate</td><td>2e-5</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"num": null,
"html": null,
"text": "",
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td>R-BERT</td><td>NlpHUST/vibert4news</td><td>0.6392</td><td>0.7092</td></tr><tr><td>R-BERT</td><td>FPTAI/vibert</td><td>0.596</td><td>0.6736</td></tr><tr><td>BERT-ES</td><td>NlpHUST/vibert4news</td><td>0.6439</td><td>0.7101</td></tr><tr><td>BERT-ES</td><td>FPTAI/vibert</td><td>0.5976</td><td>0.6822</td></tr><tr><td colspan=\"2\">Ensemble Model NlpHUST/vibert4news</td><td>0.6412</td><td>0.7108</td></tr><tr><td colspan=\"2\">Ensemble Model FPTAI/vibert</td><td>0.6029</td><td>0.6851</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"text": "Evaluation results on dev dataset.",
"content": "<table><tr><td>Model</td><td colspan=\"2\">MACRO F1 MICRO F1</td></tr><tr><td>R-BERT</td><td>0.6294</td><td>0.6645</td></tr><tr><td>BERT-ES</td><td>0.6276</td><td>0.6696</td></tr><tr><td>Ensemble Model</td><td>0.6342</td><td>0.6756</td></tr></table>"
},
"TABREF7": {
"type_str": "table",
"num": null,
"html": null,
"text": "Evaluation results on test dataset.",
"content": "<table/>"
},
"TABREF9": {
"type_str": "table",
"num": null,
"html": null,
"text": "Precision, Recall, F1 for each relation type on the dev dataset.",
"content": "<table><tr><td/><td colspan=\"2\">FPTAI/vibert vibert4news</td></tr><tr><td>Data size</td><td>10GB</td><td>20GB</td></tr><tr><td>Data domain</td><td>News</td><td>News</td></tr><tr><td>Tokenization</td><td>Subword</td><td>Syllable</td></tr><tr><td>Vocab size</td><td>38168</td><td>62000</td></tr></table>"
},
"TABREF10": {
"type_str": "table",
"num": null,
"html": null,
"text": "Comparison of NlpHUST/vibert4news and FPTAI/vibert.",
"content": "<table/>"
}
}
}
}