|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:14:03.285996Z" |
|
}, |
|
"title": "A Joint Deep Contextualized Word Representation for Deep Biaffine Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Xuan-Dung", |
|
"middle": [], |
|
"last": "Doan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Viettel Group Hanoi", |
|
"location": { |
|
"country": "Vietnam" |
|
} |
|
}, |
|
"email": "dungdx4@viettel.com.vn" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose a joint deep contextualized word representation for dependency parsing. Our joint representation consists of five components: word representations from ELMo (Pe", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose a joint deep contextualized word representation for dependency parsing. Our joint representation consists of five components: word representations from ELMo (Pe", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Dependency parsing is the task of automatically identifying binary grammatical relations between tokens in a sentence. There are two common approaches to dependency parsing: transitionbased (Nivre, 2003; McDonald and Pereira, 2006) , and graph-based (Eisner, 1996; McDonald et al., 2005a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 203, |
|
"text": "(Nivre, 2003;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 231, |
|
"text": "McDonald and Pereira, 2006)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 264, |
|
"text": "(Eisner, 1996;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 288, |
|
"text": "McDonald et al., 2005a)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recently, there has been a surge in the use of deep learning approaches to dependency parsing (Chen and Manning, 2014; Dyer et al., 2015; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2016; Ma et al., 2018; Fern\u00e1ndez-Gonz\u00e1lez and G\u00f3mez-Rodr\u00edguez, 2019; Zhang et al., 2020) , which help alleviate the need for hand-crafted features, take advantage of the vast amount of raw data through word embeddings, and achieve stateof-the-art results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 118, |
|
"text": "(Chen and Manning, 2014;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 137, |
|
"text": "Dyer et al., 2015;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 169, |
|
"text": "Kiperwasser and Goldberg, 2016;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 194, |
|
"text": "Dozat and Manning, 2016;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 211, |
|
"text": "Ma et al., 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 257, |
|
"text": "Fern\u00e1ndez-Gonz\u00e1lez and G\u00f3mez-Rodr\u00edguez, 2019;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 277, |
|
"text": "Zhang et al., 2020)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Contextualized word representations, such as ELMo and BERT, have shown to be extremely helpful in a variety of NLP tasks. The contextualized model is used as a feature extractor, which is able to encode semantic and syntactic information of the input into a vector.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we further improve dependency parsing performance by making good use of external contextualized word representations. Che et al. (2018) incorporated ELMo into both dependency parser and ensemble parser training with different initialization. Their system achieved the best result in CoNLL 2018 shared task. Li et al. (2019) captured contextual information by combining the power of both BiLSTM and selfattention via model ensembles. The results led to a new state-of-the-art parsing performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 149, |
|
"text": "Che et al. (2018)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 337, |
|
"text": "Li et al. (2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Nguyen and Nguyen (2020) replaced the pretrained word embedding of each word in an input sentence by corresponding contextualized embedding computed for the first subword token of the word. They achieve the state-of-the-art performance on VnDT dependency treebank v1.1 (Nguyen et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 290, |
|
"text": "(Nguyen et al., 2014)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In our model, an input sentence of n words w = w 1 , w 2 , ..., w n is fed to each of the component networks to learn separate token embeddings. We describe the learning process below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Graph-based Dependency Parsing follows the common structured prediction paradigm (McDonald et al., 2005a; Taskar et al., 2005) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 105, |
|
"text": "(McDonald et al., 2005a;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 126, |
|
"text": "Taskar et al., 2005)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph-based Dependency Parsing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "predict(w) = argmax y\u2208Y(w) score global (w, y) (1) score global (w, y) = part\u2208y score local (w, part) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph-based Dependency Parsing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Given an input sentence w (and the corresponding sequence of the vectors w 1:n ), we look the highestscore parse tree y in the space Y(w) of valid dependency trees over w. In order to make the search tractable, the scoring function is decomposed to the sum of local scores for each part independently.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph-based Dependency Parsing", |
|
"sec_num": "3.1" |
|
}, |
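{

"text": "To make the arc-factored decomposition in Eqs. (1)-(2) concrete, here is a minimal sketch (ours, not from the paper; the score matrix and toy tree are hypothetical) in which the global score of a tree is simply the sum of its local arc scores:

import numpy as np

# arc_scores[h, d]: local score of an arc from head h to dependent d;
# index 0 plays the role of the artificial ROOT token.
arc_scores = np.random.randn(4, 4)

# A candidate tree y is a head assignment: heads[d] = head of word d + 1.
heads = [0, 1, 1]  # toy tree for a 3-word sentence

# score_global(w, y) = sum over the parts (arcs) of y of score_local(w, part)
tree_score = sum(arc_scores[h, d + 1] for d, h in enumerate(heads))
print(tree_score)

Enumerating Y(w) exhaustively is intractable; Section 3.6 instead recovers the argmax with a maximum spanning tree algorithm.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Graph-based Dependency Parsing",

"sec_num": "3.1"

},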
|
{ |
|
"text": "The input layer maps each input word w i into a dense vector representation x i . We use word2vec (Mikolov et al., 2013) embeddings trained on baomoi dataset (Xuan-Son Vu, 2019) emb word w i , a CNN-encoder character representation (Kim, 2014 ) emb char w i , and POS-tag embedding is created randomize to enrich each word's representation emb tag t i further.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 120, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 242, |
|
"text": "(Kim, 2014", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Embedding", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x i = emb word w i \u2295 emb char w i \u2295 emb tag t i", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Word Embedding", |
|
"sec_num": "3.2" |
|
}, |
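{

"text": "As an illustration of Eq. (3), the following PyTorch sketch (ours) concatenates the three embeddings, assuming the dimensions of Table 2 (300 for words, 50 for characters and POS tags) and a hypothetical max-pooled character CNN, since the exact CNN configuration is not specified:

import torch
import torch.nn as nn

class InputEmbedding(nn.Module):
    def __init__(self, n_words, n_chars, n_tags):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, 300)  # word2vec-initialized lookup
        self.char_emb = nn.Embedding(n_chars, 50)
        self.tag_emb = nn.Embedding(n_tags, 50)     # randomly initialized POS embedding
        self.char_cnn = nn.Conv1d(50, 50, kernel_size=3, padding=1)

    def forward(self, words, tags, chars):
        # words, tags: (batch, seq); chars: (batch, seq, max_word_len)
        b, s, L = chars.shape
        c = self.char_emb(chars).view(b * s, L, -1).transpose(1, 2)
        c = self.char_cnn(c).max(dim=2).values.view(b, s, -1)  # pool over characters
        # x_i = emb_word (+) emb_char (+) emb_tag, as in Eq. (3)
        return torch.cat([self.word_emb(words), self.tag_emb(tags), c], dim=-1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Word Embedding",

"sec_num": "3.2"

},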
|
{

"text": "ELMo (Peters et al., 2018) computes the contextualized representation $h^{(LM)}_i$ of each token as $h^{(LM)}_i = BiLSTM^{(LM)}(h^{(LM)}_0, (\hat{w}_1, \ldots, \hat{w}_n))_i$ (4), where $\hat{w}_i$ is the output of a CNN over characters. ELMo's representational power is computed by a linear combination of the BiLSTM layers:",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "ELMo",

"sec_num": "3.3.1"

},
|
{ |
|
"text": "ELM o i = \u03b3 L j=0 s j h (LM ) i,j (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Embedding", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where s j is a softmax-normalized task-specific parameter and \u03b3 is a task-specific scalar. We use the Vietnamese ELMo model released by Che et al. (2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 153, |
|
"text": "Che et al. (2018)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Embedding", |
|
"sec_num": "3.2" |
|
}, |
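{

"text": "A minimal sketch (ours) of the scalar mix in Eq. (5); the layer count is illustrative, with two BiLSTM layers plus the layer-0 character-CNN representation:

import torch

def scalar_mix(layers, s_params, gamma):
    # layers: one (seq_len, dim) tensor h^{(LM)}_{:,j} per layer j = 0..L
    s = torch.softmax(s_params, dim=0)  # softmax-normalized task weights s_j
    return gamma * sum(w * h for w, h in zip(s, layers))

layers = [torch.randn(5, 1024) for _ in range(3)]  # 1024 = ELMo dimension (Table 2)
elmo = scalar_mix(layers, torch.zeros(3), torch.tensor(1.0))  # (5, 1024)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "ELMo",

"sec_num": "3.3.1"

},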
|
{ |
|
"text": "BERT introduced an alternative language modeling objective to be used during training of the model. Instead of predicting the next token, the model is expected to guess a masked token. BERT is based on the Transformer architecture (Vaswani et al., 2017) , which carries the benefit of learning potential dependencies between words directly. For use in downstream tasks, BERT extract the Transformer's encoding of each token at the last layer, which effectively produces BERT i .", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 253, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "PhoBERT (Nguyen and Nguyen, 2020) was introduced for the Vietnamese NLP community as a Roberta-based model (Liu et al., 2019) . PhoBERT achieves the state-of-the-art in Vietnamese POStag and Named Entity Recognition. Therefore, we use PhoBERT to produce BERT i .", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 125, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.3.2" |
|
}, |
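{

"text": "A sketch (ours) of producing $BERT_i$ from the released PhoBERT base checkpoint, following the first-subword convention of Nguyen and Nguyen (2020) described above; the example sentence is hypothetical and the input is assumed to be word-segmented:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('vinai/phobert-base')
model = AutoModel.from_pretrained('vinai/phobert-base')

words = ['Tôi', 'là', 'sinh_viên']  # word-segmented Vietnamese input
sub_tokens, first_idx = [], []
for w in words:
    pieces = tokenizer.tokenize(w)
    first_idx.append(len(sub_tokens) + 1)  # + 1 skips the <s> token
    sub_tokens.extend(pieces)
ids = tokenizer.convert_tokens_to_ids(['<s>'] + sub_tokens + ['</s>'])

with torch.no_grad():
    hidden = model(torch.tensor([ids])).last_hidden_state.squeeze(0)
bert_vecs = hidden[first_idx]  # (n_words, 768): one BERT_i per word",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BERT",

"sec_num": "3.3.2"

},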
|
{ |
|
"text": "After getting ELM o i and BERT i , we use them as an additional word embedding. The calculation of x i becomes:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "x i = emb word w i \u2295 emb char w i \u2295 emb tag t i \u2295ELM o i \u2295 BERT i (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "The BiLSTM is used to capture the context information of each word. Finally, the encoder outputs a sequence of hidden states s i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.3.2" |
|
}, |
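{

"text": "A sketch of the encoder under the Table 2 hyper-parameters, assuming (our reading) that the 500-dimensional encoder size is the concatenation of the two directions:

import torch.nn as nn

# Input size is the Eq. (6) concatenation: 300 + 50 + 50 + 1024 + 768 = 2192.
encoder = nn.LSTM(input_size=2192, hidden_size=250, num_layers=6,
                  batch_first=True, bidirectional=True, dropout=0.33)
# s, _ = encoder(x)  # s: (batch, seq_len, 500) hidden states s_i",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BERT",

"sec_num": "3.3.2"

},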
|
{ |
|
"text": "We use the Biaffine attention mechanism described in (Dozat and Manning, 2016) for our dependency parser. The task is posed as a classification problem, where given a dependent word, the goal is to predict the head word (or the incoming arc). Formally, let s i and h t be the BiLSTM output states for the dependent word and a candidate head word respectively, the score for the arc between s i and h t is calculated as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 78, |
|
"text": "(Dozat and Manning, 2016)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Biaffine Attention Mechanism", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "e t i = h T t W s i + U T h t + V T s i + b", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Biaffine Attention Mechanism", |
|
"sec_num": "3.4" |
|
}, |
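{

"text": "Eq. (7) transcribes directly into a small PyTorch module (ours; the 512-dimensional arc MLP size of Table 2 is assumed for the input vectors):

import torch
import torch.nn as nn

class BiaffineArc(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim))  # bilinear weight matrix
        self.U = nn.Parameter(torch.randn(dim))       # linear term for heads
        self.V = nn.Parameter(torch.randn(dim))       # linear term for dependents
        self.b = nn.Parameter(torch.zeros(1))         # bias

    def forward(self, h, s):
        # h: (n, dim) candidate head states; s: (n, dim) dependent states.
        # e[t, i] = h_t^T W s_i + U^T h_t + V^T s_i + b
        return h @ self.W @ s.T + (h @ self.U)[:, None] + (s @ self.V)[None, :] + self.b

scores = BiaffineArc(512)(torch.randn(6, 512), torch.randn(6, 512))  # (6, 6)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Biaffine Attention Mechanism",

"sec_num": "3.4"

},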
|
{ |
|
"text": "Where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector. Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h t and child vector s i as inputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Biaffine Attention Mechanism", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The parser defines a local cross-entropy loss for each position i. Assuming w j is the gold-standard head of w i , the corresponding loss is loss(s, i) = \u2212log e score (i\u2190j) 0\u2264k\u2264n,k =i e score(i\u2190k) (8)", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 172, |
|
"text": "(i\u2190j)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Loss", |
|
"sec_num": "3.5" |
|
}, |
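{

"text": "Eq. (8) is a per-position softmax cross-entropy over candidate heads; a minimal sketch (ours), masking the self-arc k = i before normalization:

import torch
import torch.nn.functional as F

# scores[i, k]: score(i <- k), word k heading word i + 1 (column 0 = ROOT).
scores = torch.randn(3, 4)  # 3 words, 4 candidate heads including ROOT
n = scores.size(0)
scores[torch.arange(n), torch.arange(n) + 1] = float('-inf')  # forbid k = i

gold_heads = torch.tensor([2, 0, 2])  # gold head index j for each position i
loss = F.cross_entropy(scores, gold_heads)  # mean of -log softmax at j",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Loss",

"sec_num": "3.5"

},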
|
{ |
|
"text": "The decoding problem of this parsing model is solved by using the Maximum Spanning Tree (MST) algorithm (McDonald et al., 2005b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 128, |
|
"text": "(McDonald et al., 2005b)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing Decoding", |
|
"sec_num": "3.6" |
|
}, |
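{

"text": "A decoding sketch that substitutes the Edmonds implementation in networkx for the authors' own decoder (our substitution, for illustration); forbidding arcs into node 0 forces the arborescence to be rooted at ROOT:

import networkx as nx

def mst_decode(arc_scores):
    # arc_scores[h][d]: score of the arc h -> d; node 0 is ROOT.
    n = len(arc_scores)
    G = nx.DiGraph()
    for h in range(n):
        for d in range(1, n):  # no incoming arcs for ROOT
            if h != d:
                G.add_edge(h, d, weight=arc_scores[h][d])
    tree = nx.maximum_spanning_arborescence(G)  # Chu-Liu/Edmonds
    return {d: h for h, d in tree.edges()}      # head of each dependent

heads = mst_decode([[0, 9, 1, 3], [0, 0, 8, 2], [0, 4, 0, 7], [0, 2, 3, 0]])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency Parsing Decoding",

"sec_num": "3.6"

},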
|
{ |
|
"text": "The VLSP organizers released the datasets in two phases. We split the first dataset into training, development, and test data, according to the 7:1:2 ratio. We then merge the second dataset into the first training data. The final statistics are summarized in Table 1 . Table 2 summarizes the hyper-parameters that we use in our experiments. We implement an addi- tional model that trains on lowercased input data, since the dataset also includes text from social media, which contains many word-form errors. We compare our results with the graph-based Deep Biaffine (BiAF) (Dozat and Manning, 2016) parser. Since the private test set of the VLSP Shared Task contains raw text only, we use VncoreNLP (Vu et al., 2018) to segment and POS-tag the raw data. Parsing performance is measured using UAS metric (Unlabeled Attachment Score) and LAS metric (Labeled Attachment Score) by comparing the gold relations of the test set and relations returned by the system. We use the evaluation script published at CoNLL 2018 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 573, |
|
"end": 598, |
|
"text": "(Dozat and Manning, 2016)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 716, |
|
"text": "(Vu et al., 2018)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 266, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 276, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4.1" |
|
}, |
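{

"text": "For clarity, a simplified computation of the two metrics (ours; the official conll18_ud_eval.py script additionally handles tokenization mismatches and multiword tokens):

def attachment_scores(gold, pred):
    # gold, pred: one (head, label) pair per token of the test set.
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return uas, las

gold = [(2, 'nsubj'), (0, 'root'), (2, 'obj')]
pred = [(2, 'nsubj'), (0, 'root'), (2, 'nmod')]  # head correct, label wrong
uas, las = attachment_scores(gold, pred)  # UAS = 1.00, LAS = 0.67",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset",

"sec_num": "4.1"

},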
|
{ |
|
"text": "The results on the test set are shown in Table 3 . Beside providing the private raw data set, VLSP organizers also provide the data in CoNLL-U (Ginter et al., 2017) format. The results on the private CoNLL-U format test set are shown in Table 5 . The final result is calculated by averaging UAS and LAS scores on the raw private data and the private CoNLL-U format data. The official rank is based on average the final UAS and LAS score. The final result of all teams is shown in Table 6 . Our model ranks 1st in LAS and 2nd in UAS. Finally, we rank 2nd on average UAS and LAS, officially.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 48, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 237, |
|
"end": 244, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 480, |
|
"end": 487, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We present joint ELMO and BERT as features for dependency parsing. In the future, we plan to analyze the effectiveness of our model when ELMO and/or BERT are excluded. We also plan to improve our model by using the self-attention mechanism as a replacement for the BiLSTM-based encoder in our current model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://universaldependencies.org/conll18/conll18 ud eval.py", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation", |
|
"authors": [ |
|
{ |
|
"first": "Wanxiang", |
|
"middle": [], |
|
"last": "Che", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yijia", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxuan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD pars- ing: Deep contextualized word embeddings, en- semble, and treebank concatenation. CoRR, abs/1807.03121.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A fast and accurate dependency parser using neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "740--750", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1082" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Deep biaffine attention for neural dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Dozat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency pars- ing. CoRR, abs/1611.01734.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Transitionbased dependency parsing with stack long shortterm memory", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Austin", |
|
"middle": [], |
|
"last": "Matthews", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "334--343", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-1033" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short- term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334-343, Beijing, China. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Three new probabilistic models for dependency parsing: An exploration", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "The 16th International Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In COL- ING 1996 Volume 1: The 16th International Confer- ence on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Left-to-right dependency parsing with pointer networks", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "-", |
|
"middle": [], |
|
"last": "Gonz\u00e1lez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "G\u00f3mez-Rodr\u00edguez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Carlos G\u00f3mez- Rodr\u00edguez. 2019. Left-to-right dependency parsing with pointer networks. CoRR, abs/1903.08445.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "CoNLL 2017 shared task -automatically annotated raw texts and word embeddings. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University", |
|
"authors": [ |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juhani", |
|
"middle": [], |
|
"last": "Luotolahti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Straka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Filip Ginter, Jan Haji\u010d, Juhani Luotolahti, Milan Straka, and Daniel Zeman. 2017. CoNLL 2017 shared task -automatically annotated raw texts and word embed- dings. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles Uni- versity.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/neco.1997.9.8.1735" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Convolutional neural networks for sentence classification", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. CoRR, abs/1408.5882.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Simple and accurate dependency parsing using bidirectional LSTM feature representations", |
|
"authors": [ |
|
{ |
|
"first": "Eliyahu", |
|
"middle": [], |
|
"last": "Kiperwasser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "313--327", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and accurate dependency parsing using bidirec- tional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Self-attentive biaffine dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhenghua", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luo", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5067--5073", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.24963/ijcai.2019/704" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ying Li, Zhenghua Li, Min Zhang, Rui Wang, Sheng Li, and Luo Si. 2019. Self-attentive biaffine depen- dency parsing. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intel- ligence, IJCAI-19, pages 5067-5073. International Joint Conferences on Artificial Intelligence Organi- zation.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Vlsp 2020 shared task: Universal dependency parsing for vietnamese", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ha My Linh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh", |
|
"middle": [], |
|
"last": "Nguyen Thi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Huyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vu Xuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thi", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Phan Thi Hue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Van Cuong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The seventh international workshop on Vietnamese Language and Speech Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "HA My Linh, NGUYEN Thi Minh Huyen, VU Xuan Luong, NGUYEN Thi Luong, PHAN Thi Hue, and LE Van Cuong. 2020. Vlsp 2020 shared task: Uni- versal dependency parsing for vietnamese. In Pro- ceedings of The seventh international workshop on Vietnamese Language and Speech Processing (VLSP 2020), Hanoi, Vietnam.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Roberta: A robustly optimized BERT pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Online large-margin training of dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1219840.1219852" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of the 43rd", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "91--98", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computa- tional Linguistics (ACL'05), pages 91-98, Ann Ar- bor, Michigan. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Online learning of approximate dependency parsing algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "11th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algo- rithms. In 11th Conference of the European Chap- ter of the Association for Computational Linguis- tics, Trento, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Non-projective dependency parsing using spanning tree algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kiril", |
|
"middle": [], |
|
"last": "Ribarov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "523--530", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005b. Non-projective dependency pars- ing using spanning tree algorithms. In Proceed- ings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 523-530, Vancouver, British Columbia, Canada. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems 26 (NIPS 2013).", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "PhoBERT: Pre-trained language models for Vietnamese", |
|
"authors": [ |
|
{ |
|
"first": "Anh", |
|
"middle": [ |
|
"Tuan" |
|
], |
|
"last": "Dat Quoc Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1037--1042", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Viet- namese. In Findings of the Association for Computa- tional Linguistics: EMNLP 2020, pages 1037-1042.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "From treebank conversion to automatic dependency parsing for vietnamese", |
|
"authors": [], |
|
"year": 2014, |
|
"venue": "Natural Language Processing and Information Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "196--207", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dat Quoc Nguyen, Dai Quoc Nguyen, Son Bao Pham, Phuong-Thai Nguyen, and Minh Le Nguyen. 2014. From treebank conversion to automatic dependency parsing for vietnamese. In Natural Language Pro- cessing and Information Systems, pages 196-207, Cham. Springer International Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "An efficient algorithm for projective dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Eighth International Conference on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "149--160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre. 2003. An efficient algorithm for pro- jective dependency parsing. In Proceedings of the Eighth International Conference on Parsing Tech- nologies, pages 149-160, Nancy, France.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. CoRR, abs/1802.05365.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Learning structured prediction models: A large margin approach", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vassil", |
|
"middle": [], |
|
"last": "Chatalbashev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daphne", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 22nd International Conference on Machine Learning, ICML '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "896--903", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1102351.1102464" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured predic- tion models: A large margin approach. In Pro- ceedings of the 22nd International Conference on Machine Learning, ICML '05, page 896-903, New York, NY, USA. Association for Computing Machin- ery.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "VnCoreNLP: A Vietnamese natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Thanh", |
|
"middle": [], |
|
"last": "Vu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dat Quoc Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dai Quoc Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "56--60", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-5012" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thanh Vu, Dat Quoc Nguyen, Dai Quoc Nguyen, Mark Dras, and Mark Johnson. 2018. VnCoreNLP: A Vietnamese natural language processing toolkit. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Demonstrations, pages 56-60, New Orleans, Louisiana. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Etnlp: A visual-aided systematic approach to select pre-trained embeddings for a downstream task", |
|
"authors": [], |
|
"year": 2019, |
|
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Son N. Tran Lili Jiang Xuan-Son Vu, Thanh Vu. 2019. Etnlp: A visual-aided systematic approach to se- lect pre-trained embeddings for a downstream task. In: Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Efficient second-order TreeCRF for neural dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhenghua", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3295--3305", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.302" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Zhang, Zhenghua Li, and Min Zhang. 2020. Effi- cient second-order TreeCRF for neural dependency parsing. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 3295-3305, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Number of sentences</td></tr><tr><td>Train set</td><td>6626</td></tr><tr><td>Develop set</td><td>507</td></tr><tr><td>Test set</td><td>1010</td></tr><tr><td>4.2 Setup</td><td/></tr></table>", |
|
"text": "Statistics of the public dataset", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Layer</td><td colspan=\"2\">Hyper-Parameter Value</td></tr><tr><td/><td>Word</td><td>dimension</td><td>300</td></tr><tr><td>Input</td><td>POS</td><td>dimension</td><td>50</td></tr><tr><td/><td>Char</td><td>dimension</td><td>50</td></tr><tr><td colspan=\"2\">LSTM Encoder</td><td>encoder layer encoder size</td><td>6 500</td></tr><tr><td/><td>MLP</td><td>arc MLP size label MLP size</td><td>512 128</td></tr><tr><td/><td/><td>Dropout</td><td>0.33</td></tr><tr><td/><td>Training</td><td>optimizer learning rate</td><td>Adam 0.001</td></tr><tr><td/><td/><td>batch size</td><td>80</td></tr><tr><td>ELMo</td><td/><td>dimension</td><td>1024</td></tr><tr><td>BERT</td><td/><td>dimension</td><td>768</td></tr></table>", |
|
"text": "Hyper-parameters in our experiments", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>UAS/LAS</td></tr><tr><td>BiAF</td><td>80.83/69.40</td></tr><tr><td>Our model</td><td>82.86/71.16</td></tr><tr><td colspan=\"2\">Our lowercase model 83.02/71.05</td></tr><tr><td colspan=\"2\">The raw private test set after segmentation and</td></tr><tr><td colspan=\"2\">POS tagging by VncoreNLP is the input to our</td></tr><tr><td colspan=\"2\">model. The results on the raw private test set are</td></tr><tr><td>shown in Table 4.</td><td/></tr></table>", |
|
"text": "The results (UAS%/LAS%) on the test set", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">: The results (UAS%/LAS%) on each file of the</td></tr><tr><td>raw private test set</td><td/></tr><tr><td colspan=\"2\">Our model Our lowercase model</td></tr><tr><td>VTB 76.33/67.46</td><td>75.68/66.59</td></tr><tr><td>vn1 74.79/65.38</td><td>72.17/62.61</td></tr><tr><td>vn3 74.22/66.73</td><td>74.95/67.28</td></tr><tr><td>vn7 68.33/61.67</td><td>66.11/61.11</td></tr><tr><td>vn8 74.81/65.71</td><td>74.29/65.97</td></tr><tr><td>vn10 80.64/72.46</td><td>78.45/69.98</td></tr><tr><td>vn14 72.61/62.45</td><td>73.36/63.69</td></tr><tr><td>Total 76.12/67.32</td><td>75.48/66.53</td></tr></table>", |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Our model Our lowercase model</td></tr><tr><td>VTB 84.81/76.44</td><td>84.58/76.29</td></tr><tr><td>vn1 78.98/70.94</td><td>77.43/70.17</td></tr><tr><td>vn3 85.89/76.97</td><td>85.46/77.58</td></tr><tr><td>vn7 82.22/75.56</td><td>80.00/73.89</td></tr><tr><td>vn8 82.49/73.93</td><td>81.32/73.8</td></tr><tr><td>vn10 85.46/77.53</td><td>81.20/72.69</td></tr><tr><td>vn14 84.04/75.31</td><td>83.54/76.81</td></tr><tr><td>Total 84.65/76.27</td><td>84.23/76.05</td></tr></table>", |
|
"text": "The results (UAS%/LAS%) on each file of the private CoNLL-U format test set", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\">UAS LAS Aver. Rank</td></tr><tr><td colspan=\"2\">Our model 80.39 71.80 76.09</td><td>2</td></tr><tr><td>DP2</td><td>80.89 71.36 76.12</td><td>1</td></tr><tr><td>DP3</td><td>78.58 70.04 74.31</td><td>4</td></tr><tr><td>DP4</td><td>79.28 70.47 74.87</td><td>3</td></tr><tr><td>DP5</td><td>77.28 68.77 73.03</td><td>5</td></tr></table>", |
|
"text": "The final results (UAS%/LAS%/Average%) of all teams", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |