{ "paper_id": "2019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:30:30.670791Z" }, "title": "A Little Perturbation Makes a Difference: Treebank Augmentation by Perturbation Improves Transfer Parsing", "authors": [ { "first": "Ayan", "middle": [], "last": "Das", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology Kharagpur", "location": { "settlement": "Kharagpur", "region": "WB", "country": "India" } }, "email": "ayan.das@cse.iitkgp.ernet.in" }, { "first": "Sudeshna", "middle": [], "last": "Sarkar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology Kharagpur", "location": { "settlement": "Kharagpur", "region": "WB", "country": "India" } }, "email": "sudeshna@cse.iitkgp.ac.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present an approach for cross-lingual transfer of dependency parser so that the parser trained on a single source language can more effectively cater to diverse target languages. In this work, we show that the cross-lingual performance of the parsers can be enhanced by over-generating the source language treebank. For this, the source language treebank is augmented with its perturbed version in which controlled perturbation is introduced in the parse trees by stochastically reordering the positions of the dependents with respect to their heads while keeping the structure of the parse trees unchanged. This enables the parser to capture diverse syntactic patterns in addition to those that are found in the source language. The resulting parser is found to more effectively parse target languages with different syntactic structures. With English as the source language, our system shows an average improvement of 6.7% and 7.7% in terms of UAS and LAS over 29 target languages compared to the baseline single source parser trained using unperturbed source language treebank. This also results in significant improvement over the transfer parser proposed by Ahmad et al. (2019) that involves an \"orderfree\" parser algorithm.", "pdf_parse": { "paper_id": "2019", "_pdf_hash": "", "abstract": [ { "text": "We present an approach for cross-lingual transfer of dependency parser so that the parser trained on a single source language can more effectively cater to diverse target languages. In this work, we show that the cross-lingual performance of the parsers can be enhanced by over-generating the source language treebank. For this, the source language treebank is augmented with its perturbed version in which controlled perturbation is introduced in the parse trees by stochastically reordering the positions of the dependents with respect to their heads while keeping the structure of the parse trees unchanged. This enables the parser to capture diverse syntactic patterns in addition to those that are found in the source language. The resulting parser is found to more effectively parse target languages with different syntactic structures. With English as the source language, our system shows an average improvement of 6.7% and 7.7% in terms of UAS and LAS over 29 target languages compared to the baseline single source parser trained using unperturbed source language treebank. This also results in significant improvement over the transfer parser proposed by Ahmad et al. 
(2019) that involves an \"orderfree\" parser algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Cross-lingual dependency parsing involves training a dependency parser using a treebank in one language (source language) and applying it to parse sentences in another language (target language). This can be used to develop parsers for languages for which no treebank is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The syntactic similarity between the source and the target languages typically plays an important role in the success of a cross-lingual transfer parser (Zeman and Resnik, 2008; Naseem et al., 2012; S\u00f8gaard, 2011) . A major challenge in transfer parsing is to bridge the difference in the syntax of the source and the target languages. For example, the object usually occurs after the corresponding verb in English while the verb normally occurs at the final position in a clause in Japanese.", "cite_spans": [ { "start": 153, "end": 177, "text": "(Zeman and Resnik, 2008;", "ref_id": "BIBREF30" }, { "start": 178, "end": 198, "text": "Naseem et al., 2012;", "ref_id": "BIBREF15" }, { "start": 199, "end": 213, "text": "S\u00f8gaard, 2011)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to achieve better performance of the transfer parsers, researchers have worked on the selection of syntactically similar source languages for a given target language (S\u00f8gaard, 2011; Rasooli and Collins, 2017; Wang and Eisner, 2016) . Attempts have also been made towards improving the performance of the transferred parsers for a given source-target language pair by reducing the syntactic gaps between them. This is done by transforming the source language parse trees (Aufrant et al., 2016; Rasooli and Collins, 2019; Eisner, 2016, 2018; Das and Sarkar, 2019) using knowledge of the typological properties of the target language. However, these approaches are target language specific and may not give satisfactory results for multiple languages.", "cite_spans": [ { "start": 175, "end": 190, "text": "(S\u00f8gaard, 2011;", "ref_id": "BIBREF23" }, { "start": 191, "end": 217, "text": "Rasooli and Collins, 2017;", "ref_id": "BIBREF16" }, { "start": 218, "end": 240, "text": "Wang and Eisner, 2016)", "ref_id": "BIBREF27" }, { "start": 479, "end": 501, "text": "(Aufrant et al., 2016;", "ref_id": "BIBREF2" }, { "start": 502, "end": 528, "text": "Rasooli and Collins, 2019;", "ref_id": "BIBREF17" }, { "start": 529, "end": 548, "text": "Eisner, 2016, 2018;", "ref_id": null }, { "start": 549, "end": 570, "text": "Das and Sarkar, 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent work by Ahmad et al. (2019) proposed an \"order-free\" parser model that comprises of a transformer-based encoder and a graph-based decoder. They show that the self-attention mechanism of the transformer with direction independent position encoding used in their model gives rise to improved performance for transfer between distant pair of languages compared to a standard parser model that uses an RNN based encoder and stack pointer-based decoder.", "cite_spans": [ { "start": 15, "end": 34, "text": "Ahmad et al. 
(2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a different approach for enhancing the performance of a target language independent transfer parser based on a single source language by augmenting the treebank of the source language without using any target language information. For this, we add sentences obtained by rearranging the original sentences in the treebank while keeping the parse tree of the sentence fixed. This can be construed as generating a more general treebank which may contain sentences not conforming syntactically to the source language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, we introduce controlled perturbation in the relative ordering of the head-dependent pairs in the source language parse trees. We stochastically alter the order of some of the headdependent pairs in the source language sentences while keeping the head-dependent relations in the parse trees intact. This perturbation reduces the dependency of the parser on the word order in the training sentences and makes it more robust towards the variation in syntax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We show that a stack-pointer network-based parser model (Ma et al., 2018) trained using this treebank results in improvement of the performance of the transfer parser over a baseline parsers trained on an unperturbed treebank. This parser also significantly outperforms the \"orderfree\" parser model proposed by Ahmad et al. (2019) model by 3.8% UAS and 4.2% LAS. We also show that our target language independent approach gives a competitive performance with that of a target language specific transformation approach (Das and Sarkar, 2019) .", "cite_spans": [ { "start": 56, "end": 73, "text": "(Ma et al., 2018)", "ref_id": "BIBREF12" }, { "start": 518, "end": 540, "text": "(Das and Sarkar, 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Initial work on model transfer involved training delexicalized models (Zeman and Resnik, 2008; using only language independent non-lexical features such as PoS tags in the source language treebanks. Several approaches for model transfer that incorporate lexical features in the transfer models have been reported in the literature. These include use of cross-lingual word clustering (T\u00e4ckstr\u00f6m et al., 2012) , dictionary-based mapping of distributed word embeddings and projection-based bi-lingual word representations (Xiao and Guo, 2014; Guo et al., 2015; Schuster et al., 2019; Ahmad et al., 2019) . S\u00f8gaard (2011) proposed an approach for selecting training instances from source language by ranking them in terms of similarity with the target language sentences in terms of PoS tag perplexity. Naseem et al. 
(2012) ; ; Zhang and Barzilay (2015) presented a multilingual algorithm for dependency parsing that selectively learns the aspects (some features listed in World Atlas of Language Structures (WALS) (Haspelmath, 2005) ) of the source languages relevant to the target language and ties the model parameters accordingly.", "cite_spans": [ { "start": 70, "end": 94, "text": "(Zeman and Resnik, 2008;", "ref_id": "BIBREF30" }, { "start": 383, "end": 407, "text": "(T\u00e4ckstr\u00f6m et al., 2012)", "ref_id": "BIBREF25" }, { "start": 519, "end": 539, "text": "(Xiao and Guo, 2014;", "ref_id": "BIBREF29" }, { "start": 540, "end": 557, "text": "Guo et al., 2015;", "ref_id": "BIBREF8" }, { "start": 558, "end": 580, "text": "Schuster et al., 2019;", "ref_id": "BIBREF20" }, { "start": 581, "end": 600, "text": "Ahmad et al., 2019)", "ref_id": "BIBREF0" }, { "start": 603, "end": 617, "text": "S\u00f8gaard (2011)", "ref_id": "BIBREF23" }, { "start": 799, "end": 819, "text": "Naseem et al. (2012)", "ref_id": "BIBREF15" }, { "start": 1011, "end": 1029, "text": "(Haspelmath, 2005)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Another approach for improving the performance of cross-lingual transfer parsers is by transforming the source language parse trees to match the syntax of the target language. Aufrant et al. (2016) improves performance of the transfer parsers by transforming the source language parse trees based on the knowledge of the target language syntax derived from WALS. Das and Sarkar (2019) also proposed a similar source language treebank transformation method in which knowledge of the syntax of a target language is derived from small number annotated target language parse trees. Wang and Eisner (2016) generated synthetic treebanks by altering the word order of the source language treebanks using knowledge of the distribution of the noun and verb dependents of other real-world languages from their respective treebanks. Wang and Eisner (2018) proposed an approach for learning an optimized permutation parameter using the given source language treebank and a gold PoS tag annotated corpus in the target language. This parameter set is then applied to permute the source language parse trees to approximately match the syntax of the target language. These methods are however target language specific and may not perform well for other languages.", "cite_spans": [ { "start": 176, "end": 197, "text": "Aufrant et al. (2016)", "ref_id": "BIBREF2" }, { "start": 363, "end": 384, "text": "Das and Sarkar (2019)", "ref_id": "BIBREF6" }, { "start": 578, "end": 600, "text": "Wang and Eisner (2016)", "ref_id": "BIBREF27" }, { "start": 822, "end": 844, "text": "Wang and Eisner (2018)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Bhat et al. (2017) have shown that training a parser model using scrambled parse trees of sentences of one domain improves performance of the parser over a parser model trained using the original treebank on test sentences of another domain. They scrambled the parse trees of sentences from newswire data and tested on conversational data. The scrambled treebank consisted either of all possible permutations of a subset of the parse trees in the original treebank, or, a fixed number of permutations of all the parse trees, where the permuted parse trees with the lowest perplexity assigned by a language model are selected. Ahmad et al. 
(2019) proposed a parser algorithm that improves the quality of transfer parser independent of the target language. They have compared the performance of combinations of different encoder-decoder architectures. They consider a bidirectional LSTM based encoder (ordersensitive) and a transformer-based encoder (orderfree), and, two types of decoders, stack-pointer based (order-sensitive) and a biaffine graph-based (order-free) and have shown that overall best cross-lingual performance of a parser across several target languages can be achieved using the combination of transformer-based encoder and graph-based decoder model. This system is expected to be agnostic to the word order of the source sentence and thus work effectively for a variety of target languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Multi-source transfer (McDonald et al., 2011; Rosa and Zabokrtsky, 2015) parsing approaches combine treebanks of multiple source languages to train cross-lingual transfer parsing models.", "cite_spans": [ { "start": 22, "end": 45, "text": "(McDonald et al., 2011;", "ref_id": "BIBREF14" }, { "start": 46, "end": 72, "text": "Rosa and Zabokrtsky, 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Parse Trees", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perturbation of Source Language", "sec_num": "3" }, { "text": "We now discuss the details of our stochastic perturbation algorithm. We call this perturbation scheme as PTSPert. In order to introduce variation in word order in the source language parse trees, we apply perturbation on each parse tree in the treebank which randomly changes the relative ordering of some head-dependent word pairs in the sentence. For each node in the parse tree, we classify each of its dependents as either pre-dependent or post-dependent based on whether it appears before or after its head word in the sentence. During perturbation, we convert a pre-dependent to post-dependent and vice versa with some probability. The probability of altering the relative position of a dependent with respect to its head word in a sentence is referred to as perturbation probability (P ). The PTSPert algorithm takes the original source language parse tree T s as input and returns the perturbed sentence as output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse Tree Structure based Perturbation", "sec_num": "3.1" }, { "text": "For each node n in the parse tree T s , we maintain four lists: pre-modifiers list (initpre n ), post-modifiers list (initpost n ), final pre-modifiers list (f inalpre n ) and final post-modifiers list (f inalpost n ). The pre-modifiers list and postmodifiers list contain the pre-modifiers and postmodifiers of the node in the same sequence as they appear in the original sentence. The final premodifiers list and final post-modifiers list are initially empty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse Tree Structure based Perturbation", "sec_num": "3.1" }, { "text": "The steps of the PTSPert algorithm are as follows;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse Tree Structure based Perturbation", "sec_num": "3.1" }, { "text": "1. Traverse the words in the sentence from left to right. 
For each word in the sentence; let w be the node in T s corresponding to the word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse Tree Structure based Perturbation", "sec_num": "3.1" }, { "text": "(a) Traverse initpre w from left to right. For each dependent in the list; i. With probability P , append the dependent to f inalpost w ii. With probability 1 \u2212 P , append the dependent to f inalpre w . (b) Traverse initpost w from left to right.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse Tree Structure based Perturbation", "sec_num": "3.1" }, { "text": "For each dependent in the list; i. With probability P , append the dependent to f inalpre w ii. With probability 1 \u2212 P , append the dependent to f inalpost w .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse Tree Structure based Perturbation", "sec_num": "3.1" }, { "text": "The in-order traversal of the perturbed tree T s based on the finalpres and finalposts of the nodes return the sentence with the new word-order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse Tree Structure based Perturbation", "sec_num": "3.1" }, { "text": "Example In Figure 2 we present the perturbed version of the sentence \"Now you write your own story\". The words whose positions have changed after perturbation are shown in red and blue. The final sentence after perturbation is \"Now you your story own write\". After perturbation, the subtree with the word \"story\" as the head becomes a pre-dependent of the word \"write\". ( Figure 1b ) and the adjective \"own\" of \"story\" is converted to a post modifier. (Figure 1c ", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 19, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 372, "end": 381, "text": "Figure 1b", "ref_id": "FIGREF0" }, { "start": 452, "end": 462, "text": "(Figure 1c", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Parse Tree Structure based Perturbation", "sec_num": "3.1" }, { "text": "Perturbation or introduction of noise in data is not new in natural language processing. It has been used to train a system to reconstruct the original sentence from its corrupted version (Dai and Le, 2015; Hill et al., 2016) . Artetxe et al. (2018) used a perturbation approach in unsupervised machine translation to learn the internal structure of a language and to reduce the dependence on the word order of the sentences to address the differences in the source and target languages. This was done by training an encoder-decoder system to recover the original sentence from its corrupted version given as input.", "cite_spans": [ { "start": 188, "end": 206, "text": "(Dai and Le, 2015;", "ref_id": "BIBREF5" }, { "start": 207, "end": 225, "text": "Hill et al., 2016)", "ref_id": "BIBREF10" }, { "start": 228, "end": 249, "text": "Artetxe et al. (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Perturbation Models", "sec_num": "3.2" }, { "text": "In this perturbation method, given a sentence of length N , N/k random swaps are made between the contiguous words, where k is a integer parameter. Artetxe et al. (2018) used k = 2. We call this perturbation approach SwapPert.", "cite_spans": [ { "start": 148, "end": 169, "text": "Artetxe et al. 
(2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Perturbation Models", "sec_num": "3.2" }, { "text": "Some target language specific perturbation approaches extensively used in dependency parsing are discussed in Section 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternative Perturbation Models", "sec_num": "3.2" }, { "text": "Data We carried out our experiments using treebanks of 29 languages from the UD v2.2 treebanks. We used the language-independent UD UPOS tags and dependency relations. We have used the acronyms of the language names in the rest of the paper. The full names of the languages are listed in Appendix A.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Parser Model", "sec_num": "4" }, { "text": "We have used 300dimensional fasttext (Bojanowski et al., 2017) pre-trained word embeddings for each language. The cross-lingual word embeddings were obtained by projecting the monolingual embeddings for all the languages into the space of the English language (Smith et al., 2017).", "cite_spans": [ { "start": 37, "end": 62, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Word Embeddings", "sec_num": null }, { "text": "We have experimented with parser models with two types of encoder-decoder based parser models. The models are as follows;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parser", "sec_num": "4.1" }, { "text": "\u2022 RS: Stack-pointer-based parser model (Ma et al., 2018) with BiLSTM RNN (Schuster and Paliwal, 1997; Hochreiter and Schmidhuber, 1997 ) based encoder and stack-pointerbased decoder model (Ma et al., 2018 ).", "cite_spans": [ { "start": 39, "end": 56, "text": "(Ma et al., 2018)", "ref_id": "BIBREF12" }, { "start": 73, "end": 101, "text": "(Schuster and Paliwal, 1997;", "ref_id": "BIBREF19" }, { "start": 102, "end": 134, "text": "Hochreiter and Schmidhuber, 1997", "ref_id": "BIBREF11" }, { "start": 188, "end": 204, "text": "(Ma et al., 2018", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Parser", "sec_num": "4.1" }, { "text": "\u2022 TG: Transformer (Vaswani et al., 2017) based encoder with relative position repre-sentation (Shaw et al., 2018) and biaffine graph based decoder (Dozat and Manning, 2017) . This encoder-decoder combination is due to Ahmad et al. (2019).", "cite_spans": [ { "start": 18, "end": 40, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF26" }, { "start": 94, "end": 113, "text": "(Shaw et al., 2018)", "ref_id": "BIBREF21" }, { "start": 147, "end": 172, "text": "(Dozat and Manning, 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Parser", "sec_num": "4.1" }, { "text": "For our experiments, we have used the implementations of the parsers and the corresponding hyperparameter settings by Ahmad et al. (2019). 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parser", "sec_num": "4.1" }, { "text": "We carried out the experiments corresponding to the different perturbation approaches under the following settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "SwapPert Given a sentence of length N , for N/k perturbations, we have carried out separate experiments with k = 2 and k = 10. 
The stackpointer-based parser model (Ma et al., 2018 ) (RS) was trained for this perturbation.", "cite_spans": [ { "start": 163, "end": 179, "text": "(Ma et al., 2018", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "PTSPertRS This refers to the stack-pointerbased parser model (Ma et al., 2018) (RS) parser model trained using a source language treebank augmented with its versions perturbed by PTSPert. We experimented with different perturbation probability values (P \u2208 {0.1, 0.2, 0.3, 0.4, 0.5}).", "cite_spans": [ { "start": 61, "end": 78, "text": "(Ma et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "STATtrans This refers to the stack-pointerbased parser model (Ma et al., 2018) (RS) parser model trained using source language treebank transformed using statistical knowledge of target language syntax derived from samples of 20 target language parse trees (Das and Sarkar, 2019) . For each target language, we randomly sampled 20 parse trees from combined training and development sets. We trained separate models specific to each target language.", "cite_spans": [ { "start": 61, "end": 78, "text": "(Ma et al., 2018)", "ref_id": "BIBREF12" }, { "start": 257, "end": 279, "text": "(Das and Sarkar, 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "RSUnpert This refers to the stack-pointer-based parser model (Ma et al., 2018 )(RS) trained on unperturbed source language treebank.", "cite_spans": [ { "start": 61, "end": 77, "text": "(Ma et al., 2018", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "TGUnpert This is the parser model comprising of a transformer-based encoder and a graph-based decoder (Ahmad et al., 2019) (TG) trained on unperturbed source language treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "All our experiments were repeated 5 times and we report the average result in this paper. Table 1 we report the performance of the RSUnpert, TGUnpert and PTSPertRS (P = 0.2) and STATtrans on 29 target languages with English as the source language. The target languages are ordered according to their typological similarity with the English language based on the metric given by Ahmad et al. (2019) . For the Chinese (zh) and Japanese (ja) languages, we report the results of the delexicalized transfer parsers for a fair comparison with the baseline. The best performance for PTSPertRS was achieved at P =0.2.", "cite_spans": [ { "start": 378, "end": 397, "text": "Ahmad et al. (2019)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 90, "end": 97, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "We observe that perturbation results in an overall improvement in the performance of the crosslingual transfer parsers. Our proposed approach (PTSPertRS) performs better than the RSUnpert baseline parser in case of 24 out of 29 target lan-guages. It improves cross-lingual performance of the transferred parser by 6.69% and 7.74% in terms UAS and LAS respectively. 
PTSPertRS also performs better than TGUnpert in case of 25 out of 29 target languages and improves average scores by 3.8%UAS and 4.2%LAS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "We also observe that although the PTSPertRS is a target language independent approach it gives better performance than STATtrans in case of 7 languages out of 29 target language. Furthermore, the parser model with transformer-based-encoder and graph-based-decoder (TG) trained using the treebank perturbed by PTSPert also performs better than TGUnpert and RSUnpert. However, it performs slightly worse than PTSPertRS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "In Table 2 we summarize the performance of the different approaches discussed in this paper in terms of UAS% and LAS% averaged over all 29 target languages with English as the source language. We observe that for different values of perturbation probability, PTSPertRS outperforms RSUnpert, TGUnpert and SwapPert. We also observe that SwapPert performs slightly better than RSUnpert for k = 10. Consider the following German sentence (DE) and its English gloss (EN). DE: \"Ich kann diese Tauch schule jeden empfehlen\" EN: I recommend this driving school to everyone. This is parsed by a transfer parser trained on English. The words and relations indicated in red show the errors by RSUnpert parser. The error is possibly because the verb empfehlen occurs at the end and after the object (Tauchschule), whereas the verbs occur before the objects in most English sentences. It is observed that the PTSPert parser correctly parses the sentence. This may have been made possible by perturbation of the source treebank resulting in instances of verb-final occurrences in the augmented treebank.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "In Table 3 we compare the labelled accuracies of PTSPertRS (P = 0.2) with RSUnpert and TGUnpert corresponding to 18 most frequent dependency relations averaged across all the 29 target languages. We observe that PTSPertRS performs better than RSUnpert and TGUnpert in terms of the case, nmod, nsubj, amod, obl, advmod, acl, obj, aux, mark and cc relations.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Dependency Relation-wise Analysis", "sec_num": "5.2.1" }, { "text": "However, PTSPertRS performs worse than either RSUnpert or TGUnpert in terms of the advcl, det, cop, nummod, compound, xcomp and flat relations. We note that the group of words related by compound, fixed and flat relations are usually arranged sequentially in a sentence and the dependents with appos relation always follow their respective heads. Thus perturbation with respect to these relations negatively affects the performance of the parsers. Furthermore, TGUnpert performs better than the PTSPertRS model in terms of the det, nummod, cop, iobj and appos relations. We observed that the dependents with cop, nummod and det relations appears before their head words in English. In case of the languages in which the copulas, determiners and numeric modifiers predominantly appears after their head words, the PTSPertRS shows an overall improvement of 17.25%, 4.14% and 50.0% respectively over the TGUnpert model. 
However, it loses out in terms of average accuracy in case of the other languages by 4.32%, 1.06% and 4.28% respectively. Since these relations appear before their respective heads in majority of the languages which includes English, the overall accuracy is less in terms of these relations. For a dependency relation, we call probability of the dependents occurring before their heads in a language as the precedence probability of that relation in that language. The precedence probability of a relation in a language is measured by the ratio of the number of times the dependents with that relation appear before their heads and the total number of times the relation occurs in the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Relation-wise Analysis", "sec_num": "5.2.1" }, { "text": "In our experiments, the precedence probabilities of the relations in the source and target languages are estimated from the corresponding training and test sets respectively. Note that we have used these estimates for analysis of the results only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Relation-wise Analysis", "sec_num": "5.2.1" }, { "text": "In Figure 3 we compare the gain in LAS of PTSPertRS over RSUnpert parser corresponding to 4 different dependency relations over all the target languages. The dependency relations are chosen such that two are short distance relations (intra-phrase): case and auxiliary and two are relatively long-distance relations (inter-phrase): nsubj and obl.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Dependency Relation-wise Analysis", "sec_num": "5.2.1" }, { "text": "For all the four dependency relations, we observe that the gains in performance of PTSPertRS over TGUnpert increases with the increase in the difference of precedence probability of the relations in the languages from that of English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Relation-wise Analysis", "sec_num": "5.2.1" }, { "text": "We also observe significant improvement in the performance of the PTSPertRS parsers over TGUnpert in case of the nsubj and obl for most of the language. Only in case of fi and ko languages, both RSUnpert and TGUnpert perform better than PTSPertRS in terms of the nsubj relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Relation-wise Analysis", "sec_num": "5.2.1" }, { "text": "It is also observed that PTSPertRS performs significantly better than RSUnpert and TGUnpert in terms of the aux and case relations for the languages in which the precedence probabilities of the relations are different from that of English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Relation-wise Analysis", "sec_num": "5.2.1" }, { "text": "The results on PTSPertRS discussed above correspond to a single perturbation probability value applied on all the dependency relations. However, we observed that the best accuracies corresponding to different dependency relations were achieved at different P values. Thus we hypothesize that perturbing the dependents of different dependency relations by different amounts might be more helpful. 
We try to get an estimate of the perturbation probabilities corresponding different dependency relations from the performance of the PTSPertRS models trained using augmented treebanks perturbed with different perturbation probability values on the test set of a small number of languages. For this, we selected a random subset of 9 languages from the 29 languages. The 9 reference languages are es, sl, he, id, sv, de, et, ar and hi.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PTSPertRS with Variable Perturbation Probability Values", "sec_num": "5.2.2" }, { "text": "The steps for obtaining the probability value corresponding to a dependency relation are as follows;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PTSPertRS with Variable Perturbation Probability Values", "sec_num": "5.2.2" }, { "text": "\u2022 Corresponding to each P value in {0.0, 0.1, 0.2, 0.3, 0, 4, 0.5}, we find the average accuracy for the dependency relation over the 9 languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PTSPertRS with Variable Perturbation Probability Values", "sec_num": "5.2.2" }, { "text": "\u2022 We take the P value for the dependency relation for which the highest average accuracy is observed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PTSPertRS with Variable Perturbation Probability Values", "sec_num": "5.2.2" }, { "text": "In Table 4 we present the perturbation probability values used for the different dependency relations.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "PTSPertRS with Variable Perturbation Probability Values", "sec_num": "5.2.2" }, { "text": "We apply these perturbation prob- ability values corresponding to the different dependency relations to perturb the source language parse trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PTSPertRS with Variable Perturbation Probability Values", "sec_num": "5.2.2" }, { "text": "In Table 5 we present the performances of TGUnpert, RSUnpert, PTSPertRS with fixed P values and the PTSPertRS with variable P values averaged over all the 9 reference languages, 29 target languages and the 20 held-out languages respectively. On the set of the held-out 20 target languages, we observe an improvement of 1.6% UAS and 2.16% LAS over the best single perturbation probability value of (P = 0.2) on the 20 languages. On the set of all the 29 languages also, this perturbation approach results in an overall improvement of 1.66% UAS and 2.49% LAS over the best single perturbation value (P = 0.2).", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "PTSPertRS with Variable Perturbation Probability Values", "sec_num": "5.2.2" }, { "text": "We report here a summary of the results for Hindi as the source language. The variable perturbation probability values were derived from the following languages: es, sl, he, id, sv, de, et, ar and en.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results with Hindi as Source Language", "sec_num": "5.3" }, { "text": "In Table 6 we present the results corresponding to the different transfer approaches averaged over 29 target languages. We observe that PTSPertRS with different values of P outperform RSUnpert and TGUnpert. The best PTSPertRS result is achieved at P =0.3. PTSPertRS with variable P values also performs better than fixed P values. 
We observe that PTSPert with P =0.3 and variable P performs better than RSUnpert and TGUnpert for 27 out of 29 languages except ko and ja. We observe that ko and ja are syntactically quite close to Hindi and hence a parser model trained on unperturbed treebanks perform better than their perturbed versions.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Results with Hindi as Source Language", "sec_num": "5.3" }, { "text": "In Table 7 we compare the performance of RSUnpert, TGUnpert, PTSPertRS with the P = 0.3 and PTSPertRS with variable P value averaged over all the 9 reference languages, 29 target languages and the 20 held-out languages respectively. We observe that PTSPertRS with variable P values gives the best results.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 7", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Results with Hindi as Source Language", "sec_num": "5.3" }, { "text": "In Table 8 we report the average performance of RSUnpert, TGUnpert, PTSPertRS with the P = 0.3 and PTSPertRS with variable P values on Tamil (ta), Telugu (te), Urdu (ur) and Marathi (mr) languages for which treebanks are available in UD v2.2.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 8", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Results with Hindi as Source Language", "sec_num": "5.3" }, { "text": "We observe that on an average over the four Indian languages, the best UAS and LAS scores are achieved for RSUnpert and TGUnpert respectively. Since the distribution of the dependents with respect to their heads for different dependency relations in the Indian languages are similar to that of Hindi, the best results are obtained for the parsers trained using unperturbed source treebank. This observation is in coherence with the results in English and Hindi where RSUnpert trained on unperturbed treebanks yield better results than PTSPert for the languages syntactically similar to the corresponding sources languages i.e. no and sv for English and ko and ja for Hindi.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results with Hindi as Source Language", "sec_num": "5.3" }, { "text": "We show that our perturbation approach enhances the performance of the \"order-free\" model proposed by Ahmad et al. 2019for most of the 29 source languages. For this, we trained the parser model with transformer-based encoder and graphbased decoder using both unperturbed source language treebanks and PTSPert treebanks perturbed using P =0.1. The performance of the model trained using unperturbed treebank is taken as baseline. Following Ahmad et al. (2019), we trained the models using the first 4000 parse trees of each of the source language treebanks. In Figure 4 , for each language as a source, we show the average improvement over all the target languages in cross-lingual performance of the parser trained using perturbed treebank. The languages are sorted according to their average syntactic distance from the other languages.", "cite_spans": [], "ref_spans": [ { "start": 560, "end": 568, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Performance over Other Source Languages", "sec_num": "5.4" }, { "text": "We observe that perturbation improves the average performance of the transfer parsers for the source languages except pt, sk and ca. 
The Pearson correlation coefficient of the average improvements with the languages as source with respect to the average distance from other languages is 0.82 indicating that the improvement due to perturbation is strongly correlated with the average distance from the target languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance over Other Source Languages", "sec_num": "5.4" }, { "text": "In this paper propose an approach for introducing perturbation in the source language treebank to improve single source target language independent cross-lingual transfer parsing. We show that this approach indeed helps to improve the performance of the transferred parsers over models trained using only source language treebanks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The implementation was obtained from https:// github.com/uclanlp/CrossLingualDepParser", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing", "authors": [ { "first": "Zhisong", "middle": [], "last": "Wasi Uddin Ahmad", "suffix": "" }, { "first": "Zuezhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wasi Uddin Ahmad, Zhisong Zhang, Zuezhe Ma, Ed- uard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency pars- ing. In Proceedings of the 2019 Conference of the NAACL: HLT.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unsupervised neural machine translation", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural ma- chine translation. In International Conference on Learning Representations.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Zero-resource dependency parsing: Boosting delexicalized cross-lingual transfer with linguistic knowledge", "authors": [ { "first": "Lauriane", "middle": [], "last": "Aufrant", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wisniewski", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2016, "venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers", "volume": "", "issue": "", "pages": "119--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lauriane Aufrant, Guillaume Wisniewski, and Fran\u00e7ois Yvon. 2016. 
Zero-resource dependency parsing: Boosting delexicalized cross-lingual transfer with linguistic knowledge. In COLING 2016, 26th Inter- national Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 119- 130.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Leveraging newswire treebanks for parsing conversational data with argument scrambling", "authors": [ { "first": "A", "middle": [], "last": "Riyaz", "suffix": "" }, { "first": "Irshad", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Dipti", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "", "middle": [], "last": "Sharma", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th International Conference on Parsing Technologies", "volume": "", "issue": "", "pages": "61--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riyaz A. Bhat, Irshad Bhat, and Dipti Sharma. 2017. Leveraging newswire treebanks for parsing conver- sational data with argument scrambling. In Proceed- ings of the 15th International Conference on Parsing Technologies, pages 61-66, Pisa, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Semi-supervised sequence learning", "authors": [ { "first": "M", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Dai", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3079--3087", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural informa- tion processing systems, pages 3079-3087.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Transform, combine, and transfer: Delexicalized transfer parser for low-resource languages", "authors": [ { "first": "Ayan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Sudeshna", "middle": [], "last": "Sarkar", "suffix": "" } ], "year": 2019, "venue": "ACM Trans. Asian Low-Resour. Lang. Inf. Process", "volume": "19", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.1145/3325886" ] }, "num": null, "urls": [], "raw_text": "Ayan Das and Sudeshna Sarkar. 2019. Transform, combine, and transfer: Delexicalized transfer parser for low-resource languages. ACM Trans. Asian Low-Resour. Lang. Inf. 
Process., 19(1):4:1-4:30.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deep biaffine attention for neural dependency parsing", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency pars- ing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Cross-lingual dependency parsing based on distributed representations", "authors": [ { "first": "Jiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the ACL and the 7th IJCNLP", "volume": "1", "issue": "", "pages": "1234--1244", "other_ids": { "DOI": [ "10.3115/v1/P15-1119" ] }, "num": null, "urls": [], "raw_text": "Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual depen- dency parsing based on distributed representations. In Proceedings of the 53rd Annual Meeting of the ACL and the 7th IJCNLP (Volume 1: Long Papers), pages 1234-1244, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The world atlas of language structures", "authors": [ { "first": "Martin", "middle": [], "last": "Haspelmath", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Haspelmath. 2005. The world atlas of language structures / edited by Martin Haspelmath ... [et al.].", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning distributed representations of sentences from unlabelled data", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1602.03483" ] }, "num": null, "urls": [], "raw_text": "Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. arXiv preprint arXiv:1602.03483.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Comput", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": { "DOI": [ "10.1162/neco.1997.9.8.1735" ] }, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. 
Neural Comput., 9(8):1735- 1780.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Stackpointer networks for dependency parsing", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Zecong", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jingzhou", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1403--1414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stack- pointer networks for dependency parsing. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1403-1414.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Universal dependency annotation for multilingual parsing", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Yvonne", "middle": [], "last": "Quirmbach-Brundage", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the ACL", "volume": "2", "issue": "", "pages": "92--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Os- car T\u00e4ckstr\u00f6m, et al. 2013. Universal dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the ACL (Volume 2: Short Papers), volume 2, pages 92-97.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Multi-source transfer of delexicalized dependency parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" } ], "year": 2011, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "62--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the conference on EMNLP, pages 62-72. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Selective sharing for multilingual dependency parsing", "authors": [ { "first": "Tahira", "middle": [], "last": "Naseem", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Globerson", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the ACL: Long Papers", "volume": "1", "issue": "", "pages": "629--637", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meet- ing of the ACL: Long Papers -Volume 1, ACL '12, pages 629-637, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Cross-lingual syntactic transfer with limited resources", "authors": [ { "first": "Mohammad", "middle": [], "last": "Sadegh", "suffix": "" }, { "first": "Rasooli", "middle": [], "last": "", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "279--293", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Sadegh Rasooli and Michael Collins. 2017. Cross-lingual syntactic transfer with limited resources. Transactions of the Association for Com- putational Linguistics, 5:279-293.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Low-resource syntactic transfer with unsupervised source reordering", "authors": [ { "first": "Mohammad", "middle": [], "last": "Sadegh", "suffix": "" }, { "first": "Rasooli", "middle": [], "last": "", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Sadegh Rasooli and Michael Collins. 2019. Low-resource syntactic transfer with unsu- pervised source reordering. CoRR, abs/1903.05683.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Klcpos3-a language similarity measure for delexicalized parser transfer", "authors": [ { "first": "Rudolf", "middle": [], "last": "Rosa", "suffix": "" }, { "first": "Zdenek", "middle": [], "last": "Zabokrtsky", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the ACL and the 7th IJCNLP", "volume": "2", "issue": "", "pages": "243--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudolf Rosa and Zdenek Zabokrtsky. 2015. Klcpos3-a language similarity measure for delexicalized parser transfer. In Proceedings of the 53rd Annual Meeting of the ACL and the 7th IJCNLP (Volume 2: Short Papers), volume 2, pages 243-249.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bidirectional recurrent neural networks", "authors": [ { "first": "M", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "K", "middle": [ "K" ], "last": "Paliwal", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Signal Processing", "volume": "45", "issue": "11", "pages": "2673--2681", "other_ids": { "DOI": [ "10.1109/78.650093" ] }, "num": null, "urls": [], "raw_text": "M. Schuster and K. K. Paliwal. 1997. Bidirectional re- current neural networks. 
IEEE Transactions on Signal Processing, 45(11):2673-2681.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing", "authors": [ { "first": "Tal", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Ori", "middle": [], "last": "Ram", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Globerson", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.09492" ] }, "num": null, "urls": [], "raw_text": "Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing. arXiv preprint arXiv:1902.09492.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Self-attention with relative position representations", "authors": [ { "first": "Peter", "middle": [], "last": "Shaw", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. CoRR, abs/1803.02155.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", "authors": [ { "first": "Samuel", "middle": [ "L" ], "last": "Smith", "suffix": "" }, { "first": "David", "middle": [ "H", "P" ], "last": "Turban", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Hamblin", "suffix": "" }, { "first": "Nils", "middle": [ "Y" ], "last": "Hammerla", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Data point selection for cross-language adaptation of dependency parsers", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the ACL: HLT: short papers", "volume": "2", "issue": "", "pages": "682--686", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard. 2011. Data point selection for cross-language adaptation of dependency parsers. In Proceedings of the 49th Annual Meeting of the ACL: HLT: short papers - Volume 2, pages 682-686. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Target language adaptation of discriminative transfer parsers", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the NAACL: HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. Proceedings of the 2013 Conference of the NAACL: HLT.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Cross-lingual word clusters for direct transfer of linguistic structure", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 conference of the NAACL: HLT", "volume": "", "issue": "", "pages": "477--487", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 conference of the NAACL: HLT, pages 477-487. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The galactic dependencies treebanks: Getting more data by synthesizing new languages", "authors": [ { "first": "Dingquan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "491--505", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dingquan Wang and Jason Eisner. 2016. The galactic dependencies treebanks: Getting more data by synthesizing new languages. 
Transactions of the Association for Computational Linguistics, 4:491-505.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Synthetic data made to order: The case of parsing", "authors": [ { "first": "Dingquan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on EMNLP", "volume": "", "issue": "", "pages": "1325--1337", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dingquan Wang and Jason Eisner. 2018. Synthetic data made to order: The case of parsing. In Proceedings of the 2018 Conference on EMNLP, pages 1325-1337.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Distributed word representation learning for cross-lingual dependency parsing", "authors": [ { "first": "Min", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Yuhong", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "119--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 119-129.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Cross-language parser adaptation between related languages. NLP for Less Privileged Languages", "authors": [ { "first": "D", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "35--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Zeman and Philip Resnik. 2008. Cross-language parser adaptation between related languages. NLP for Less Privileged Languages, pages 35-35.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Hierarchical low-rank tensors for multilingual transfer parsing", "authors": [ { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan Zhang and Regina Barzilay. 2015. Hierarchical low-rank tensors for multilingual transfer parsing. Association for Computational Linguistics. 
A Appendices A.1 Language Name Abbreviations en - English, no - Norwegian, sv - Swedish, fr - French, pt - Portuguese, da - Danish, es - Spanish, it - Italian, hr - Croatian, ca - Catalan, pl - Polish, uk - Ukrainian, sl - Slovenian, bg - Bulgarian, ru - Russian, de - German, he - Hebrew, cs - Czech, ro - Romanian, sk - Slovak, id - Indonesian, fi - Finnish, et - Estonian, zh - Chinese, ar - Arabic, la - Latin, ko - Korean, hi - Hindi, ja - Japanese.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Perturbation on an English sentence.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": ")", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Parses of a German sentence.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "The blue and the red lines indicate the gains in LAS of PTSPertRS and TGUnpert over RSUnpert, respectively. The black line indicates the precedence probabilities of the dependency relations in the language. The languages are sorted by precedence probability, from low to high. RS: RSUnpert, TG: TGUnpert, PTS: PTSPertRS", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "The red curve indicates the average improvement over the baseline for a language as source. The blue curve and the right y-axis indicate the average distance of a language from the rest.", "uris": null, "num": null }, "TABREF1": { "num": null, "type_str": "table", "content": "
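For readers scripting against the result tables, the abbreviation list in Appendix A.1 can be expressed as a lookup table. A minimal sketch in Python: the constant name LANGUAGE_NAMES is ours (not from the paper), the two-letter codes and names are copied verbatim from the appendix, and "Ukrainian" corrects the original's spelling.

# Hypothetical helper: maps the two-letter language codes used in the
# tables to the language names listed in Appendix A.1 of this paper.
LANGUAGE_NAMES = {
    "en": "English", "no": "Norwegian", "sv": "Swedish", "fr": "French",
    "pt": "Portuguese", "da": "Danish", "es": "Spanish", "it": "Italian",
    "hr": "Croatian", "ca": "Catalan", "pl": "Polish", "uk": "Ukrainian",
    "sl": "Slovenian", "bg": "Bulgarian", "ru": "Russian", "de": "German",
    "he": "Hebrew", "cs": "Czech", "ro": "Romanian", "sk": "Slovak",
    "id": "Indonesian", "fi": "Finnish", "et": "Estonian", "zh": "Chinese",
    "ar": "Arabic", "la": "Latin", "ko": "Korean", "hi": "Hindi",
    "ja": "Japanese",
}

# The appendix lists 29 codes in total (English plus 28 other languages).
assert len(LANGUAGE_NAMES) == 29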
[Figure: dependency parses of the German sentence "Ich kann diese Tauchschule jeden empfehlen" ("I can recommend this diving school to everyone").]
(a) Parser output of PTSPertRS model (arcs: root, nsubj, aux, det, obj, iobj)
(b) Parser output of RSUnpert model (arcs: root, nsubj, det, nsubj, cop, acl)
", "text": "Comparison of average performance of different transfer approaches.", "html": null }, "TABREF5": { "num": null, "type_str": "table", "content": "
Dependency-wise average accuracies of RSUnpert, TGUnpert and PTSPertRS (P = 0.2).
", "text": "", "html": null }, "TABREF7": { "num": null, "type_str": "table", "content": "", "text": "Perturbation probability values corresponding to the different dependency relations.", "html": null }, "TABREF9": { "num": null, "type_str": "table", "content": "
", "text": "Average %UAS/%LAS over different sets of target languages for RSUnpert, TGUnpert, PTSPertRS (P =0.2) and PTSPertRS with variable P .", "html": null }, "TABREF11": { "num": null, "type_str": "table", "content": "
No. of langs                 RSUnpert  TGUnpert  PTSPert (0.3)  PTSPert (Var. P)
9 reference languages   UAS  31.8      36.7      50.8           52.4
                        LAS  22.4      27.1      37.7           39.6
20 held-out languages   UAS  38.2      42.3      53.8           54.7
                        LAS  27.8      31.6      40.8           42.0
29 target languages     UAS  36.2      40.5      52.9           54.0
                        LAS  26.1      30.2      39.9           41.2
", "text": "Average UAS%/LAS% of different transfer parser approaches with Hindi as the source language.", "html": null }, "TABREF12": { "num": null, "type_str": "table", "content": "
       RSUnpert  TGUnpert  PTSPert (0.3)  PTSPert (Var. P)
UAS    75.9      74.9      73.9           74.4
LAS    55.6      55.9      54.8           55.5
", "text": "Average %UAS/%LAS over different sets of target languages for different parsing approaches with Hindi as source language.", "html": null }, "TABREF13": { "num": null, "type_str": "table", "content": "", "text": "Average %UAS/%LAS over ta, te, mr AND ur for different parsing approaches with Hindi as source language.", "html": null } } } }