{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:13:58.144207Z" }, "title": "Implementing Bi-LSTM-based deep biaffine neural dependency parser for Vietnamese Universal Dependency Parsing", "authors": [ { "first": "Nguyen", "middle": [], "last": "Thi", "suffix": "", "affiliation": { "laboratory": "", "institution": "FPT University Hanoi", "location": { "country": "Vietnam" } }, "email": "" }, { "first": "Thuy", "middle": [], "last": "Lien", "suffix": "", "affiliation": { "laboratory": "", "institution": "FPT University Hanoi", "location": { "country": "Vietnam" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents our approach to resolve the Vietnamese Universal Dependency Parsing task in VLSP 2020 Evaluation Campaign. On the basis of Deep Biaffine Attention for Neural Dependency Parsing(Dozat and Manning, 2017), we adapted the dependency parser for Vietnamese. Our best model obtained a pretty good performance on the test datasets, achieving 84.08% UAS score and 75.64% LAS on average for the ConLL-U dataset. On the raw text data-set, the results we reached still quite limited, on average 74.47% of UAS and 65.3% of LAS.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper presents our approach to resolve the Vietnamese Universal Dependency Parsing task in VLSP 2020 Evaluation Campaign. On the basis of Deep Biaffine Attention for Neural Dependency Parsing(Dozat and Manning, 2017), we adapted the dependency parser for Vietnamese. Our best model obtained a pretty good performance on the test datasets, achieving 84.08% UAS score and 75.64% LAS on average for the ConLL-U dataset. On the raw text data-set, the results we reached still quite limited, on average 74.47% of UAS and 65.3% of LAS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Dependency grammars is a family of grammar formalisms that are quite important in contemporary speech and language processing systems (Daniel Jurafsky, 2019). The dependency parsing task is to identify pairs of a dependent token and a head token that have dependency relation and their dependency relation labels in a given sentence. For decades, researchers have applied dependency parsing in many tasks of natural language processing such as information extraction, coreference resolution, question-answering, semantic parsing, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Universal dependency parsing shared-task was proposed in VLSP 2020 evaluation campaign to promote the development of dependency parsers for Vietnamese(HA My Linh, 2020). The sharedtask published a training corpus of approximately 10,000 dependency-annotated sentences. There are two parts of testing, the first one requires the participant to parse from the input as raw texts where no linguistic information is available. And the second, participant systems will have to parse dependencies information from linguistics annotated sentences. On the CoNLL-U formated test dataset, with the best model, we reached 84.08% UAS score and 75.64% LAS score (averaged on seven test sets).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With the raw text dataset, we obtained 74.47% UAS score and 65.30% LAS score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dependency parsing consists of transition-based, graph-based, and grammar-based parser (Nivre and K\u00fcbler, 2009) . A graph-based algorithm finds the highest scoring parse tree from all possible outputs of an input sentence, scoring each complete tree, while a transition-based algorithm builds a parse by a sequence of actions and scoring each action individually (Zhang and Clark, 2008) .", "cite_spans": [ { "start": 87, "end": 111, "text": "(Nivre and K\u00fcbler, 2009)", "ref_id": "BIBREF8" }, { "start": 363, "end": 386, "text": "(Zhang and Clark, 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "In 2016, Kiperwasser & Goldberg presented a scheme for dependency parsing which is based on bidirectional-LSTMs. The BiLSTM is trained jointly with the parser objective (Kiperwasser and Goldberg, 2016) . The effectiveness was demonstrated in two ways by integrating it into a greedy transition-based parser and a globally optimized first-order graph-based parser. In both cases, this approach yields extremely competitive parsing accuracies.", "cite_spans": [ { "start": 169, "end": 201, "text": "(Kiperwasser and Goldberg, 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "In 2017, Dozat & Manning build off recent work from Kiperwasser & Goldberg, they use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with biaffine classifiers to predict arcs and labels (Dozat and Manning, 2017) . Their parser gained state of the art or near state of the art performance on standard treebanks for six different languages.", "cite_spans": [ { "start": 229, "end": 254, "text": "(Dozat and Manning, 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Training data includes 6 files, including 8150 sentences. In which, the number of different UPOS labels assigned is 30 and the number of XPOS labels is 56, these labels are unevenly distributed across the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preprocessing", "sec_num": "3.1" }, { "text": "Realizing that the appearance of some labels with a low sample count may negatively interfere with the results, we converted the group POS tag to accordingly non-group label, such as 'ADV:G' to 'ADV'. Simultaneously, we merge the labels with the same meanings but the different writing styles, such as Adv and ADV. The histogram of UPOS tag labels and XPOS tag labels after handling are shown in Figure 1 ", "cite_spans": [], "ref_spans": [ { "start": 396, "end": 404, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Data preprocessing", "sec_num": "3.1" }, { "text": "Tokenization and Sentence Splitting The first step of processing is tokenizing the raw text sentences. We used the VNCoreNLP toolkit to deal with this stage. In Vietnamese, lemmas are the same as the word forms. POS Tagging To predict POS, we build a BERTbased (Devlin et al., 2019) classifier using bertbase-multilingual-cased pretrained-model available in HuggingFace (Wolf et al., 2020) . The bert-basemultilingual-cased includes 12-layer, 768-hidden, 12-heads, 179M parameters, trained on cased text in the top 104 languages with the largest Wikipedia. This model was fine-tuned on the training data on total of 8 epochs using the hyper-parameters shown in Table 1 . Dependency Parsing We implemented a BiLSTMbased deep biaffine neural dependency parser (Dozat and Manning, 2017) .", "cite_spans": [ { "start": 261, "end": 282, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF1" }, { "start": 370, "end": 389, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF10" }, { "start": 758, "end": 783, "text": "(Dozat and Manning, 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 661, "end": 668, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Proposed System", "sec_num": "3.2" }, { "text": "Value lr 2e-5 eps 1e-8 Optimizer AdamW We used Adam optimizer (Kingma and Ba, 2014) to optimize the network with the learning rate of 0.003 and fastText for word representations (Joulin et al., 2016) . With fastText pre-trained word vectors, each word vector has 300 dimensions.", "cite_spans": [ { "start": 62, "end": 83, "text": "(Kingma and Ba, 2014)", "ref_id": "BIBREF6" }, { "start": 178, "end": 199, "text": "(Joulin et al., 2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Hyper-parameters", "sec_num": null }, { "text": "We set the max-steps to 50,000. However, after 3,000 steps without improvement in the validation accuracy, the training process is terminated instead of running through the whole 50,000 steps. After every 100 steps, a model checkpoint will be saved if there is an increase in validation accuracy. Table 2 summarises the hyper-parameters of the dependency parser we used in the parser.", "cite_spans": [], "ref_spans": [ { "start": 297, "end": 304, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Hyper-parameters", "sec_num": null }, { "text": "In our experiments, we built two different models for dependency parsing, the first model uses both UPOS and XPOS information as training and predict data and the second model only uses UPOS information during the entire process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hyper-parameters", "sec_num": null }, { "text": "The VLSP 2020 workshop provides two dependency parsing test datasets. The first one includes data files in raw text format and the other contains data files in which the sentences have been tokenized and stored in the CoNLL-U format. UAS. One of the reasons that can be mentioned is that the subject of vn8 is somewhat different from the other data sets. Table 4 presents the evaluation results on the CoNLL-U datasets. The model using both UPOS and XPOS information for training gives better results, 84.08% of UAS and 75.64% of LAS on average of seven datasets. This model works best on the vn7 dataset, reaching 85.56% of UAS and 78.89% of LAS. However, it performs worse on the vn1 set, obtains only 74.34% of UAS and 63.37% of LAS. The second model which uses only UPOS and tokens as input on the training process achieves a bit lower performance, with 81.58% averaged UAS score and 73.32% averaged LAS score. The result obtained when adding xpos feature are higher than using only upos feature. It proves that xpos feature has a relatively vital meaning in universal dependency parsing.", "cite_spans": [], "ref_spans": [ { "start": 355, "end": 362, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experiments & Results", "sec_num": "4" }, { "text": "Experimental results indicate that the results obtained on raw text dataset is substantially worse than those obtained on data in CoNLL-U format. UAS decreased 9,61% and LAS reduced even more, up to 10.34% on average. A plausible explanation is that the raw data processing is not done effectively enough. On the other hand, the results that we achieved are relatively low compared to the evaluation on English data (Wilie et al., 2020) . However, it implies that there will probably still be room for improvement.", "cite_spans": [ { "start": 416, "end": 436, "text": "(Wilie et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments & Results", "sec_num": "4" }, { "text": "In this paper, we present our experiments for the Vietnamese universal dependency parsing task at VLSP 2020 Evaluation Campaign. For raw text processing, we combine several toolkits and models. At the first step, we choose the VNCoreNLP toolkit as a tokenizer. Then a BERT classifier is used to detect the universal part-of-speech tags and Vietnamese part-of-speech tags. At the end, a Bi-LSTM-based deep biaffine neural dependency parser is implemented to produce dependency parsing results. We have obtained promising results on the test dataset, although the results are still lower than results on English datasets. It indicates that our approach probably still has space for growth. Our experiment includes separate modules, which are not inextricably linked. In the future works, we plan to continue doing experiments and improving the dependency parsing model. Next, we plan to build a comprehensive and unified pipeline system which processes raw text and generates dependencies information. In addition, we will also analyze more carefully the pre-processing and processing stages to give a convincing explanation for the difference between the results on the CoNLL-U formatted dataset and raw-text dataset, as well as the difference between files in these datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Chapter 15 Dependency Parsing", "authors": [ { "first": "H. Martin Daniel", "middle": [], "last": "James", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2019, "venue": "Speech and Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James H. Martin Daniel Jurafsky. 2019. Chapter 15 Dependency Parsing. In Speech and Language Pro- cessing.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Deep biaffine attention for neural dependency parsing", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency pars- ing.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Vlsp 2020 shared task: Universal dependency parsing for vietnamese", "authors": [ { "first": "Luong Nguyen Thi Luong Phan Thi Hue Le", "middle": [], "last": "Vu Xuan", "suffix": "" }, { "first": "Ha My", "middle": [], "last": "Van Cuong", "suffix": "" }, { "first": "", "middle": [], "last": "Linh", "suffix": "" }, { "first": "Minh", "middle": [], "last": "Nguyen Thi", "suffix": "" }, { "first": "", "middle": [], "last": "Huyen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "VU Xuan Luong NGUYEN Thi Luong PHAN Thi Hue LE Van Cuong HA My Linh, NGUYEN Thi Minh Huyen. 2020. Vlsp 2020 shared task: Uni- versal dependency parsing for vietnamese. In Pro- ceedings of The seventh international workshop on", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Vietnamese Language and Speech Processing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vietnamese Language and Speech Processing (VLSP 2020), Hanoi, Vietnam.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Matthijs Douze, H\u00e9rve J\u00e9gou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H\u00e9rve J\u00e9gou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. International Conference on Learning Representations.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Simple and accurate dependency parsing using bidirectional LSTM feature representations", "authors": [ { "first": "Eliyahu", "middle": [], "last": "Kiperwasser", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "313--327", "other_ids": { "DOI": [ "10.1162/tacl_a_00101" ] }, "num": null, "urls": [], "raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and accurate dependency parsing using bidirec- tional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Dependency parsing. Synthesis Lectures on Human Language Technologies", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" } ], "year": 2009, "venue": "", "volume": "2", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.2200/S00169ED1V01Y200901HLT002" ] }, "num": null, "urls": [], "raw_text": "Joakim Nivre and Sandra K\u00fcbler. 2009. Dependency parsing. Synthesis Lectures on Human Language Technologies, 2.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Syafri Bahar, and Ayu Purwarianti. 2020. Indonlu: Benchmark and resources for evaluating indonesian natural language understanding", "authors": [ { "first": "Bryan", "middle": [], "last": "Wilie", "suffix": "" }, { "first": "Karissa", "middle": [], "last": "Vincentio", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Genta Indra Winata", "suffix": "" }, { "first": "Xiaohong", "middle": [], "last": "Cahyawijaya", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sidik", "middle": [], "last": "Zhi Yuan Lim", "suffix": "" }, { "first": "Rahmad", "middle": [], "last": "Soleman", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Mahendra", "suffix": "" }, { "first": "", "middle": [], "last": "Fung", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryan Wilie, Karissa Vincentio, Genta Indra Winata, Samuel Cahyawijaya, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, and Ayu Purwarianti. 2020. Indonlu: Benchmark and resources for evaluating indonesian natural language understanding.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Lhoest", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "562--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Pro- ceedings of the 2008 Conference on Empirical Meth- ods in Natural Language Processing, pages 562- 571, Honolulu, Hawaii. Association for Computa- tional Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "andFigure 2." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Histogram of processed Upos Figure 2: Histogram of processed Xpos" }, "TABREF0": { "text": "Hyper-parameters of the BERT classifier for pos.", "html": null, "content": "
Hyper-parametersValue
Embedding size100
Word embeddingfastText
lr3e-3
OptimizerAdam
LSTM size400
Deep biaffine size400
LSTM dropout0.5
LSTM depth3
", "num": null, "type_str": "table" }, "TABREF1": { "text": "", "html": null, "content": "", "num": null, "type_str": "table" }, "TABREF2": { "text": "shows the evaluation results on the raw text dataset. Our system achieves 65.30% of LAS and 74.47% of UAS on average. The best result is obtained on the vn3 set which was crawled from VnExpress, with 69.29% of LAS and 77.09% of UAS. In contrast, the result recorded on the vn8 set is the lowest, just 59.35% of LAS and 69.61% of ) 74.55 71.7 77.09 70.56 69.61 76.41 71.62 74.47 LAS(%) 65.34 58.91 69.29 65.56 59.35 68.22 63.69 65.30", "html": null, "content": "
ModelVTBvn1vn3vn7vn8vn10 vn14 Avg. Score
FirstUAS(%
Model
", "num": null, "type_str": "table" }, "TABREF3": { "text": "Results on the raw text dataset.", "html": null, "content": "
ModelVTBvn1vn3vn7vn8vn10 vn14 Avg. Score
FirstUAS(%) 84.41 74.34 84.42 85.56 82.88 83.55 82.7984.08
ModelLAS(%) 75.94 63.37 76.48 78.89 74.84 75.48 76.3175.64
SecondUAS(%) 83.20 68.32 76.60 71.11 70.56 73.13 75.5681.58
ModelLAS(%) 75.14 55.95 68.48 61.11 61.09 63.29 68.0873.32
", "num": null, "type_str": "table" }, "TABREF4": { "text": "Results on the CoNLL-U formated dataset.", "html": null, "content": "", "num": null, "type_str": "table" } } } }