|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:13:48.521191Z" |
|
}, |
|
"title": "Do not neglect related languages: The case of low-resource Occitan cross-lingual word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Woller", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "lisa_woller@web.de" |
|
}, |
|
{ |
|
"first": "Viktor", |
|
"middle": [], |
|
"last": "Hangya", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "hangyav@cis.lmu.de" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Fraser", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "fraser@cis.lmu.de" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Cross-lingual word embeddings (CLWEs) have proven indispensable for various natural language processing tasks, e.g., bilingual lexicon induction (BLI). However, the lack of data often impairs the quality of representations. Various approaches requiring only weak crosslingual supervision were proposed, but current methods still fail to learn good CLWEs for languages with only a small monolingual corpus. We therefore claim that it is necessary to explore further datasets to improve CLWEs in low-resource setups. In this paper we propose to incorporate data of related high-resource languages. In contrast to previous approaches which leverage independently pre-trained embeddings of languages, we (i) train CLWEs for the low-resource and a related language jointly and (ii) map them to the target language to build the final multilingual space. In our experiments we focus on Occitan, a low-resource Romance language which is often neglected due to lack of resources. We leverage data from French, Spanish and Catalan for training and evaluate on the Occitan-English BLI task. By incorporating supporting languages our method outperforms previous approaches by a large margin. Furthermore, our analysis shows that the degree of relatedness between an incorporated language and the low-resource language is critically important.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Cross-lingual word embeddings (CLWEs) have proven indispensable for various natural language processing tasks, e.g., bilingual lexicon induction (BLI). However, the lack of data often impairs the quality of representations. Various approaches requiring only weak crosslingual supervision were proposed, but current methods still fail to learn good CLWEs for languages with only a small monolingual corpus. We therefore claim that it is necessary to explore further datasets to improve CLWEs in low-resource setups. In this paper we propose to incorporate data of related high-resource languages. In contrast to previous approaches which leverage independently pre-trained embeddings of languages, we (i) train CLWEs for the low-resource and a related language jointly and (ii) map them to the target language to build the final multilingual space. In our experiments we focus on Occitan, a low-resource Romance language which is often neglected due to lack of resources. We leverage data from French, Spanish and Catalan for training and evaluate on the Occitan-English BLI task. By incorporating supporting languages our method outperforms previous approaches by a large margin. Furthermore, our analysis shows that the degree of relatedness between an incorporated language and the low-resource language is critically important.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Cross-lingual word embeddings (CLWEs) are important for a wide range of NLP tasks including bilingual lexicon induction (BLI) (Vuli\u0107 and Korhonen, 2016; Patra et al., 2019) , Machine Translation , and cross-lingual transfer learning (Xiao and Guo, 2014; Schuster et al., 2019) . Two main types of approaches to learn CLWEs are mapping methods, where a set of pretrained monolingual embeddings is projected into another monolingual space (Mikolov et al., 2013) , and joint methods, where the monolingual and cross-lingual objectives are optimized jointly (e.g., Klementiev et al., 2012; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 152, |
|
"text": "(Vuli\u0107 and Korhonen, 2016;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 172, |
|
"text": "Patra et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 253, |
|
"text": "(Xiao and Guo, 2014;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 276, |
|
"text": "Schuster et al., 2019)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 459, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 561, |
|
"end": 585, |
|
"text": "Klementiev et al., 2012;", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
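
{

"text": "As context for the mapping family, the following minimal sketch (our own illustration, not code from any of the cited works) learns a linear map W from a seed dictionary of paired source/target vectors, in the least-squares form of Mikolov et al. (2013) and in the orthogonality-constrained (Procrustes) variant used by most later mapping methods:\n\nimport numpy as np\n\ndef least_squares_map(X, Y):\n    # Mikolov-style mapping: W = argmin_W ||X W - Y||_F\n    # X, Y: (n, d) arrays holding source/target vectors of n seed pairs\n    W, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)\n    return W\n\ndef procrustes_map(X, Y):\n    # Same objective with W constrained to be orthogonal;\n    # closed-form solution via the SVD of X^T Y\n    U, _, Vt = np.linalg.svd(X.T @ Y)\n    return U @ Vt  # apply as X @ W",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},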
|
{ |
|
"text": "Since recent research is more and more interested in dealing with low-resource languages, learning multilingual representations for low-resource languages is important as well Kementchedjhieva et al., 2018; . However, a lack of parallel data impairs the performance of existing strongly supervised models, which is why a lot of recent research focuses on reducing the need for parallel data (Artetxe et al., 2017; Smith et al., 2017; Artetxe et al., 2018; . Mapping methods are sensitive to the approximate isomorphism of embedding spaces, which is not the case for many languages . The low isomorphism of distant language pairs was tackled by learning CLWEs jointly Ormazabal et al., 2019; Devlin et al., 2019) . However, they rely on large monolingual corpora which are not available for many languages. Furthermore, the lack of large data leads to low isomorphism as well, since it results in low-quality monolingual embedding spaces (Michel et al., 2020) . Hence, mapping methods, which rely on the assumption of approximate isomorphism cannot be fruitfully applied in many cases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 206, |
|
"text": "Kementchedjhieva et al., 2018;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 413, |
|
"text": "(Artetxe et al., 2017;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 433, |
|
"text": "Smith et al., 2017;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 455, |
|
"text": "Artetxe et al., 2018;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 667, |
|
"end": 690, |
|
"text": "Ormazabal et al., 2019;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 691, |
|
"end": 711, |
|
"text": "Devlin et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 937, |
|
"end": 958, |
|
"text": "(Michel et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
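
{

"text": "As a deliberately crude illustration of what approximate isomorphism means operationally (our own simplification, not a measure used in the cited works), one can correlate the intra-lingual similarity structures of the two spaces over a set of translation pairs:\n\nimport numpy as np\n\ndef relational_similarity(X, Y):\n    # X, Y: (n, d) embeddings of n translation pairs; row i of X\n    # translates to row i of Y\n    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)\n    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)\n    Sx, Sy = Xn @ Xn.T, Yn @ Yn.T  # intra-lingual cosine matrices\n    iu = np.triu_indices_from(Sx, k=1)\n    # values near 1.0 suggest approximately isomorphic spaces\n    return np.corrcoef(Sx[iu], Sy[iu])[0, 1]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},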
|
{ |
|
"text": "However, as there are still only poor CLWEs for many low-resource language pairs , we argue that in addition to reducing requirements for training data, methods which offer opportunities precisely for low-resource setups, like leveraging data from linguistically related high-resource languages, should be considered as well. While there exist NLP systems that make use of related languages, e.g., in Machine Translation (Nakov and Ng, 2012; Nguyen and Chiang, 2017) , only few work focuses on including them directly into CLWEs. An approach considering a related language in order to improve CLWEs for low-resource language pairs, including English-Occitan, has been proposed by Kementchedjhieva et al. (2018) . However, using pre-trained monolingual embedding spaces, they do not take into account that monolingual representations of lowresource languages might be of poor quality, which can impede mapping performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 441, |
|
"text": "(Nakov and Ng, 2012;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 466, |
|
"text": "Nguyen and Chiang, 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 680, |
|
"end": 710, |
|
"text": "Kementchedjhieva et al. (2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we propose a method where, in contrast to previous work, we consider both addressing the issue of monolingual embedding quality and leveraging information from a supporting language. To this end, we learn multilingual representations for a low-resource source language, a related language, and a target language in two steps: First, we train CLWEs for the low-resource language and the related higher-resource language using the joint-align approach by Wang et al. (2020) . In that manner, the internal structure of the low-resource embeddings becomes more similar to the structure of the higher-quality related language embeddings. In the second step, we map the resulting CLWE space to the target space using the supervised MUSE model . Since the first step results in a higher-quality embedding space for the source language, a better mapping to the target space can be found due to their higher isomorphism.", |
|
"cite_spans": [ |
|
{ |
|
"start": 468, |
|
"end": 486, |
|
"text": "Wang et al. (2020)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In our experiments, we learn representations for Occitan together with a related language and English. Occitan is a low-resource Romance language, which is related to high-resource languages like French and Spanish, and especially closely related to Catalan. Since particularly good CLWEs exist for each of the three related languages paired with English, we make use of monolingual data from these languages in order to obtain better representations for Occitan and English.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "By evaluating our final multilingual embedding space on the Occitan-English BLI task, we show that our method improves CLWEs for these languages compared to all the baseline settings. Furthermore, we find that there are significant differences in how much of an improvement is achieved with each of the supporting languages. Investigating the impact of multiple factors, such as the pairwise linguistic relatedness of the source, target and the related languages, their BLI performance and the dataset sizes of the individual languages, we found the relatedness of the low-resource and the related language to be most influential. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Occitan language Occitan is a Romance language which is spoken in the south of France, in the Aran Valley (a part of Catalonia, Spain), in a small region in Italy at the French border and in Monaco (see Figure 1 1 , where the ensemble of all colored areas represents the Occitan-speaking territory). However, it is not used as a primary language in any of these countries and it only has an official status in Catalonia.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 217, |
|
"text": "Figure 1 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The language the closest related to Occitan is Catalan and they both belong to the Occitano-Romance languages (Bec, 1970) . It is also closely related to other Romance languages, e.g., French and Spanish. Occitan is (like all Romance languages) an inflectional language which is morphologically richer than English: there is no case inflection, but it has a rather complex inflectional system for verbs. Occitan word order follows the subject-verb-object regularities and it is therefore syntactically very similar to English. However, like Spanish and Catalan, but unlike French and English, Occitan is a so-called pro-drop language, i.e., a conjugated verb can be used without a personal pronoun and hence the subject position does not necessarily have to be filled in an Occitan sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 121, |
|
"text": "(Bec, 1970)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The exact number of speakers of Occitan is not known for certain. Most sources report numbers between 1 and 10 millions, and there are significantly more people with passive knowledge of Occitan than active speakers (Cichon, 2002, pp. 19f) . Furthermore, rather than one Occitan language, there are many different dialects (see Figure 1) . However, the Languedocian variant is mostly used in written Occitan and thus in the Occitan Wikipedia, which we use for our experiments. Due to these factors the amount of available written digital resources is low.", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 239, |
|
"text": "(Cichon, 2002, pp. 19f)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 337, |
|
"text": "Figure 1)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "CLWEs for low-resource setups A lot of research on CLWEs for low-resource languages focuses on reducing the need for cross-lingual data. Zhang et al. (2017) use adversarial training for aligning monolingual vector spaces without any bilingual signal. propose an unsupervised mapping method where they combine adversarial training with a Procrustes Analysis refinement step in every iteration. learn CLWEs jointly for their unsupervised neural machine translation model by concatenating corpora of source and target languages and training fastText skipgram embeddings (Bojanowski et al., 2017) on this corpus. In order to combine the benefits of joint and mapping methods, Wang et al. (2020) propose an approach where they combine both methods. First, CLWEs are trained jointly on a concatenated corpus containing monolingual source and target language data. Oversharing among source and target language vocabularies is then reduced by a vocabulary reallocation step, and finally, source embeddings are mapped to the target embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 156, |
|
"text": "Zhang et al. (2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 672, |
|
"end": 690, |
|
"text": "Wang et al. (2020)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, despite the progress of unsupervised CLWE models, multiple surveys argue against focusing on fully unsupervised approaches. Firstly, giving up on every supervision signal is not necessary, since there is always a small amount of parallel data available if monolingual data is abundant (Artetxe et al., 2020) . Secondly, show that even the most robust unsupervised approach (Artetxe et al., 2018) cannot deal properly with multiple distant and low-resource languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 316, |
|
"text": "(Artetxe et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 404, |
|
"text": "(Artetxe et al., 2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Nevertheless, there are still a lot of languages for which even monolingual data is extremely scarce. For these languages, monolingual embeddings are usually of poor quality (Michel et al., 2020) . Consequently, mapping methods are not fruitfully applicable, since they rely on high-quality monolingual embedding spaces. Adams et al. (2017) show that monolingual embedding quality of extremely lowresource languages can be improved if CLWEs for a low-and a high-resource language are trained jointly. Eder et al. (2021) propose a method for better CLWEs by using a small bilingual seed dictionary together with pre-trained monolingual embeddings of the higher-resource language for initialization. On the other hand, these approaches rely only on the source and target languages, while we show the benefits of incorporating further related languages into a multilingual space.", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 195, |
|
"text": "(Michel et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 340, |
|
"text": "Adams et al. (2017)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 519, |
|
"text": "Eder et al. (2021)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Leveraging related languages Besides reducing data requirements, it is also helpful to explore information from linguistically related highresource languages in low-resource setups. This idea has, for example, been considered in Machine Translation (MT). Nakov and Ng (2012) propose a statistical MT model which requires only a small parallel corpus of the low-resource source and the high-resource target languages, and additionally a larger parallel corpus of a related high-resource language and the target language. Nguyen and Chiang (2017) introduce a transfer learning model for neural MT (NMT) where embeddings of shared words are kept when transferring the model from the original to a related low-resource language. Gu et al. 2018train a NMT model where embeddings learned during training are computed from a universal embedding space which embed multiple languages. Thus, high-resource languages can provide support for related low-resource languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 274, |
|
"text": "Nakov and Ng (2012)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Leveraging information from related highresource languages to build CLWEs for lowresource setups has only been considered in a few works until now. Multiple approaches were proposed to build representations involving more than two languages, but they either rely on pretrained monolingual embeddings (Ammar et al., 2016; Heyman et al., 2019; Chen and Cardie, 2018; Alaux et al., 2018) or large training corpora (Devlin et al., 2019) , and are thus not well suited for low-resource setups. Kementchedjhieva et al. (2018) proposed Multi-support Generalized Procrustes Analysis (MGPA) to directly incorporate related languages into CLWEs by learning a threeway alignment among English, a low-resource language, and a supporting language. They improve CLWE quality for multiple low-resource language pairs, including Occitan-English. However, unlike our method, MGPA does not consider the internal structure of the monolingual low-resource language space (since it relies on pre-trained monolingual embeddings).", |
|
"cite_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 320, |
|
"text": "(Ammar et al., 2016;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 341, |
|
"text": "Heyman et al., 2019;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 364, |
|
"text": "Chen and Cardie, 2018;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 384, |
|
"text": "Alaux et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 432, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 489, |
|
"end": 519, |
|
"text": "Kementchedjhieva et al. (2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To improve CLWEs for low-resource setups, we incorporate a related language by learning representations in two steps: First, we train CLWEs for the low-resource and a related language jointly. Subsequently, we use the resulting joint space to-gether with a set of monolingual target language embeddings to learn the final multilingual space including the low-resource, the supporting, and the target languages. We detail the two steps below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Joint alignment In the first step of our model, we train CLWEs for a low-resource and a related language jointly. This helps to make the internal structure of the low-resource embeddings more similar to the structure of the related language space. Since isomorphism of vector spaces is correlated with mapping performance Ormazabal et al., 2019) and given that high-quality alignments among English and the supporting language exist, joint training of a low-resource and a related language allows for achieving a better mapping among the low-resource language and English as well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 345, |
|
"text": "Ormazabal et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Instead of simply building embeddings on the concatenated corpora of the two languages , we use the joint-align model proposed by Wang et al. (2020) . In their approach, CLWEs are learned in three steps, which we outline in the following. First, unsupervised joint training is performed by running fastText skip-gram (Bojanowski et al., 2017) on the concatenated corpus consisting of monolingual data from both languages (L 1 and L 2 ). Since related languages share part of their vocabulary, these words act as a cross-lingual signal to automatically align the vectors of the two languages. However, this step suffers from vocabulary oversharing, i.e., the corpus of L 1 contains words which are only part of the vocabulary of L 2 due to noise and vice-versa, which leads to errors. To mitigate the issue, vocabulary reallocation is performed in the second step, where words are assigned to one of three sets: the vocabulary of only L 1 , only L 2 or the so-called shared vocabulary. The reallocation is decided based on the frequency ratio of a given word in the two corpora. Using a threshold value, if a word is mainly appearing in the corpus of L 1 or L 2 , it is allocated to the language specific vocabulary, otherwise it is kept in the shared vocabulary. Finally in step three, the language specific embeddings are refined by mapping word embeddings of L 1 to L 2 in order to improve the final CLWE quality. The resulting CLWE space thus consists of embeddings of shared words and aligned embeddings of non-shared words among the two languages. Mapping In the second component of our approach, we use MUSE to map the embeddings resulting from joint-align training with the monolingual target language embeddings. We use the supervised version of the MUSE model, which we find to work better for our embeddings than the unsupervised version. In addition, supervised MUSE yields good results when training with identical character strings as a supervision signal (Kementchedjhieva et al., 2018) . We consider this supervision method in our experiments as well to ensure that a small training dictionary is not holding back performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 148, |
|
"text": "Wang et al. (2020)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 342, |
|
"text": "(Bojanowski et al., 2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1969, |
|
"end": 2000, |
|
"text": "(Kementchedjhieva et al., 2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
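
{

"text": "The vocabulary reallocation step can be sketched as follows; the threshold value and data structures are illustrative assumptions rather than the exact criterion of Wang et al. (2020):\n\nfrom collections import Counter\n\ndef reallocate(tokens_l1, tokens_l2, threshold=0.9):\n    # Assign each word to L1, L2, or the shared vocabulary based on\n    # its relative frequency in the two monolingual corpora\n    c1, c2 = Counter(tokens_l1), Counter(tokens_l2)\n    vocab_l1, vocab_l2, shared = set(), set(), set()\n    for w in set(c1) | set(c2):\n        ratio = c1[w] / (c1[w] + c2[w])\n        if ratio >= threshold:\n            vocab_l1.add(w)  # appears mainly in the L1 corpus\n        elif ratio <= 1 - threshold:\n            vocab_l2.add(w)  # appears mainly in the L2 corpus\n        else:\n            shared.add(w)  # genuinely shared word\n    return vocab_l1, vocab_l2, shared",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Approach",

"sec_num": "3"

},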
|
{ |
|
"text": "Corpora and vocabulary We pursue our experiments for the low-resource Occitan language and we choose French, Spanish, and Catalan as supporting languages. Like Occitan, they are all Romance languages and hence they all have a partly shared vocabulary with Occitan as well as some similarities in morphology and syntax. French and Spanish have been chosen because they are very high-resource. Catalan has been chosen because it is the language the closest related to Occitan. Furthermore, it has been shown that for all three languages, very good CLWEs together with English can be obtained . We extract Occitan, French, Spanish, and Catalan corpora from respective Wikipedia dumps. 2 Corpora and vocabulary sizes are listed in Table 1. 2 Available at https://dumps.wikimedia.org/. They are preprocessed using the tools available at:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 727, |
|
"end": 735, |
|
"text": "Table 1.", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
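
{

"text": "A minimal sketch of assembling the joint training corpus from two Wikipedia extracts (file names are hypothetical; shuffling the combined lines is our assumption and is not stated in the text):\n\nimport random\n\ndef build_joint_corpus(path_low, path_related, path_out, seed=0):\n    # Concatenate the sentence-per-line monolingual corpora of the\n    # low-resource and the related language into one training file\n    with open(path_low, encoding='utf-8') as f:\n        lines = f.readlines()\n    with open(path_related, encoding='utf-8') as f:\n        lines += f.readlines()\n    random.Random(seed).shuffle(lines)\n    with open(path_out, 'w', encoding='utf-8') as f:\n        f.writelines(lines)\n\n# e.g., build_joint_corpus('oc_wiki.txt', 'ca_wiki.txt', 'oc_ca_joint.txt')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "4"

},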
|
{ |
|
"text": "https://www.kdnuggets.com/2017/11/ building-wikipedia-text-corpus-nlp.html. Furthermore, in Table 2 , we report vocabulary sizes of the joint corpora used for training the Occitanrelated language CLWEs. We also include the number and proportion of shared words per language pair in this table.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 99, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Embeddings In all our experiments, we used the pre-trained English fastText wiki word vectors released by Bojanowski et al. (2017) . 3 For Occitan, French, Spanish, and Catalan, we train our own monolingual embeddings using the Gensim version of fastText skipgram (\u0158eh\u016f\u0159ek and Sojka, 2010) with the same parameters used for the pre-trained English embeddings. This is to ensure that they are learned on the same corpora than the embeddings in our proposed model. The monolingual Occitan embedding space used for our baselines contains 111,353 word vectors. All the other monolingual spaces are restricted to the most frequent 200,000 words for training. The smaller number of Occitan embeddings is due to the small corpus and the threshold of at least five occurrences for a word to be considered when training fastText embeddings. The number of embeddings resulting from joint-align training with Occitan and each of the supporting languages is shown in Figure 2 . Here, the proportion of Occitan, related language, and shared word vectors is illustrated.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 130, |
|
"text": "Bojanowski et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 955, |
|
"end": 963, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
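
{

"text": "A sketch of the monolingual training setup with Gensim's fastText implementation; the dimensionality and the five-occurrence threshold mirror the pre-trained wiki vectors, while the remaining values and file names are illustrative:\n\nfrom gensim.models import FastText\n\nmodel = FastText(\n    corpus_file='oc_wiki.txt',  # hypothetical path to the Occitan corpus\n    sg=1,  # skip-gram\n    vector_size=300,  # as in the pre-trained wiki vectors\n    min_count=5,  # words with fewer than five occurrences are dropped\n    window=5,\n    epochs=5,\n)\nmodel.wv.save_word2vec_format('oc.vec')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "4"

},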
|
{ |
|
"text": "Parameters We compare the performance of our model against multiple baselines. We use supervised MUSE and Generalized Procrustes Analysis, an extension of MUSE (GPA; Kementchedjhieva et al., 2018) , as baseline models where a mapping between monolingual Occitan and monolingual English embeddings is performed. In addition, we train three baselines using Multi-support GPA (MGPA; Kementched-jhieva et al., 2018) where pre-trained monolingual embeddings from either French, Spanish or Catalan are incorporated. We use all baseline models with default parameters except the threshold for ranking candidate translation pairs, which we set to 15,000 instead of default 10,000 in all models, since it results in a better alignment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 196, |
|
"text": "Kementchedjhieva et al., 2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the first step of our proposed model, we use the joint-align model (Wang et al., 2020) for Occitan and a related language with default parameters. The only exception is that we use supervised MUSE for mapping instead of default RCSLS in order to stay consistent with the second mapping step in our model. We tested using RCSLS in both steps instead, but it did not yield a good mapping for Occitan and English. We use supervised MUSE with the same parameters as in our baseline, both within joint-align training and in the second step of our proposed model. Evaluation task Our evaluation task is bilingual lexicon induction (BLI). We use it to evaluate the quality of our final multilingual embedding spaces, translating from Occitan to English. We also use it for evaluating the shared Occitan and related language spaces resulting from the first step of our model. For this purpose, we run the MUSE evaluation script and we report scores achieved with CSLS retrieval.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 89, |
|
"text": "(Wang et al., 2020)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
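
{

"text": "For reference, CSLS retrieval (Conneau et al., 2018) can be sketched as follows; this is our own compact version, not the MUSE evaluation script itself:\n\nimport numpy as np\n\ndef csls(X, Y, k=10):\n    # X: (n, d) source and Y: (m, d) target embeddings, rows L2-normalized.\n    # CSLS penalizes hub words by subtracting the mean similarity of each\n    # vector to its k nearest cross-lingual neighbors.\n    sim = X @ Y.T  # cosine similarities\n    r_src = np.sort(sim, axis=1)[:, -k:].mean(axis=1)  # source neighborhoods\n    r_tgt = np.sort(sim, axis=0)[-k:, :].mean(axis=0)  # target neighborhoods\n    return 2 * sim - r_src[:, None] - r_tgt[None, :]\n\n# BLI: the induced translation of source word i is csls(X, Y)[i].argmax()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "4"

},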
|
{ |
|
"text": "We extract training dictionaries for English \u2192 Occitan (En-Oc), Occitan \u2192 English (Oc-En), Occitan \u2192 French (Oc-Fr), and Occitan \u2192 Spanish (Oc-Es) from freelang. tracted from an Occitan website. 5 For English \u2192 Occitan, we use the test dictionary that Kementchedjhieva et al. (2018) extracted from this website. We clean all the dictionaries manually in a manner that they only contain 1-to-1 pairs and that source words appearing in both the initial training and test dictionary of a certain source language \u2192 target language pair are discarded from the training dictionary. For the Occitan-Catalan (Oc-Ca) language pair, there is to our knowledge no comparable bilingual dictionary available online. We therefore create our own training and test dictionaries by extracting a Catalan \u2192 French dictionary from freelang 6 and using it together with our Occitan \u2192 French dictionaries for mapping Occitan and Catalan words that have the same translation into French. In addition, we check the resulting dictionaries manually to avoid improper translation pairs. Dictionary sizes are reported in Table 3 . Note that especially our training dictionaries vary significantly in size, since we use all the words available from our sources for every language pair. Unfortunately, due to copyright restrictions, we are not able to release dictionaries based on freelang. Please follow the above instructions to recreate them.", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 282, |
|
"text": "Kementchedjhieva et al. (2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1092, |
|
"end": 1099, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Bilingual dictionaries", |
|
"sec_num": null |
|
}, |
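
{

"text": "The pivoting step for the Oc-Ca dictionaries can be sketched as follows (function and variable names are ours; the manual check described above is still required):\n\nfrom collections import defaultdict\n\ndef pivot_dictionary(oc_fr_pairs, ca_fr_pairs):\n    # Pair Occitan and Catalan words that share a French translation\n    fr_to_ca = defaultdict(set)\n    for ca, fr in ca_fr_pairs:\n        fr_to_ca[fr].add(ca)\n    oc_ca = set()\n    for oc, fr in oc_fr_pairs:\n        for ca in fr_to_ca.get(fr, ()):\n            oc_ca.add((oc, ca))\n    return sorted(oc_ca)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Bilingual dictionaries",

"sec_num": null

},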
|
{ |
|
"text": "Furthermore, we use French-English, Spanish-English, and Catalan-English training dictionaries available from MUSE .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilingual dictionaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We show the results for Occitan \u2192 English BLI yielded by the baselines and our model in Table 4 , Settings a-f and 1-6, respectively. Note that as the mapping direction in case of MUSE and GPA, Occitan was taken as the source and English as the target language. MGPA, however, can only be trained with the low-resource and the related language on the target language side. We evaluated the resulting CLWEs for Occitan \u2192 English afterwards. For MGPA and our model, results for incorporating either French, Spanish, or Catalan are listed separately in different columns. Furthermore, Settings 1-6 of our model vary in two more dimensions. Firstly, we employ two different subsets of the shared Occitan-related language space as source embeddings: In Settings 1-4, we use the 'full space' containing vectors of words contained in the shared and language specific (Occitan and the given related language) vocabularies. In Settings 5-6, we use a 'reduced space' containing only the vectors of shared and Occitan vocabularies. Secondly, we experiment with various bilingual supervision signals: the Occitan-English training dictionary (oc-en), the dictionary of the respective incorporated related language and English (rel-en), both training dictionaries concatenated (full), or identical character string supervision (id char). In settings where the reduced source embedding space is used, we omit training with the 'rel-en' and 'full' dictionaries, since the related language words are excluded from the embedding space.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 96, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "It can be seen from Table 4 that all 18 settings of our model outperform all the baseline models, i.e., regardless of which language we use for support, which subset of the shared Occitan-related language space we employ, and which initial supervision signal we use. However, there are significant differences in performance across the various settings: Relative improvements compared to the strongest baseline (MGPA ca) are between 2.78% and 15.47%. We discuss these differences in the following.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 27, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Support from related language words Having a closer look at the numbers in Table 4 , it becomes obvious that for every incorporated language, Settings 1-4 (full space) yield better scores than Settings 5-6 (reduced). The only exception is Setting 5 in the experiments with Spanish. More precisely, if related language words are considered during training, P@1 for Occitan-English BLI is up to 4.4% higher than in settings where only Occitan and shared words are included. This shows that in terms of representing the low-resource language together with English, the multilingual embedding space containing low-resource, related language, and English words is of higher quality than the embedding space with only low-resource language French Spanish Catalan P@1 P@10 P@1 P@10 P@1 P@ Table 4 : Results for Occitan \u2192 English BLI achieved by various baselines and our model. The best P@1 and P@10 scores per incorporated language are underlined, while bold indicates the overall best. 'Full space' denotes using the ensemble of Occitan + related language + shared source embeddings for mapping, while the 'reduced' space only consists of Occitan + shared words. The 'full' training dictionary is a concatenation of the Occitan \u2192 English (oc-en) and the incorporated related language \u2192 English (rel-en) dictionaries.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 82, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 782, |
|
"end": 789, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "and English words. The reason for this is that the related language does not only help to build better representations for the low-resource language in step 1 (joint-alignment) of our model, but it also helps to build a better mapping in step 2. This is due to the iterative refinement of MUSE which can update the initial training dictionary with goodquality related language-English word pairs as well in addition to the Occitan-English pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Comparing performance across the different supporting languages shows that incorporating Catalan leads by far to the largest improvements (up to 15.5% P@1 compared to the strongest baseline), while French and Spanish only contribute to smaller improvements (up to 6.4% and 7.3% P@1, respectively). We investigated multiple factors to find out where these differences come from: the quality of the Occitan-related language CLWEs, the quality of the related language-English CLWEs, and the linguistic relatedness of Occitan and an incorporated language, among others. For this purpose, we evaluate the Occitan-related language CLWEs resulting from the first step of our model as well as the embedding spaces resulting from the second step of our model on the BLI task for the respective language pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Differences across incorporated languages", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Occitan Catalan \u2192 English 67.97 83.58 Table 5 : Results for BLI. 1-3: Occitan-related language CLWEs resulting from the first step of our model. 4-6: Multilingual space resulting from the second step of our model. Table 5 show that the quality of the CLWE spaces mentioned above cannot explain that Catalan provides the best support for Occitan-English CLWEs. This is because results for French and Spanish are even better than scores for Catalan. Note, however, that Settings 1-3 in Table 5 are not completely comparable, since our test dictionaries do not contain the exact same word pairs for every language pair. Nevertheless, by evaluating the Occitan-related language CLWEs we show that the shared Occitan-Catalan space is not clearly better than the other two CLWE spaces in terms of BLI performance and thus this aspect is not responsible for the better quality of the final multilingual embeddings resulting from our model. The same holds for the evaluation of the related language-English CLWEs in Settings 4-6. The degree of linguistic relatedness to Occitan, however, is the only factor where Catalan is clearly more favorable than French and Spanish (as described in Section 2). Consequently, we can infer that it is the decisive factor for how much support an incorporated language provides for learning better Occitan-English CLWEs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 45, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 221, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 491, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "No.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "English \u2192 Occitan direction In another set of experiments, we switch source and target languages to examine how our model performs when translating from English to Occitan. For completeness, we do not only reverse the evaluation direction but the mapping direction of the used CLWEs in step 2 of our approach as well, i.e., we use the pre-trained monolingual English embeddings as source and map them to the shared Occitan-related language space resulting from the first step of our model as before. We show our results for English \u2192 Occitan in Table 6 , including the results of our baseline models for the same mapping direction.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 545, |
|
"end": 552, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The numbers in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We find that, contrarily to our experiments for the Occitan \u2192 English direction, our approach cannot clearly improve P @1 on the English \u2192 Occitan BLI task. Checking the nearest neighbors of English test source words in our shared Occitan-French space reveals that it is very French-centric. In many cases, a French word is retrieved as the nearest neighbor of an English word, as shown in Table 7 . This problem does not occur in the baselines due to no shared embeddings between languages. On the other hand, the phenomenon affects other multilingual models with shared vocabularies as well, such as mBERT (Devlin et al., 2019) , which are mainly used for downstream tasks, e.g., zero-shot cross-lingual transfer learning. To mitigate the issue, we experimented with excluding Source word MUSE Our model age edat \u00e2ge bird auc\u00e8l oiseau bank banca bank Table 7 : Examples of English source words and their nearest neighbors in the Occitan embeddings before and after incorporating French (bold: correct Occitan translation; underlined correct French translation).", |
|
"cite_spans": [ |
|
{ |
|
"start": 608, |
|
"end": 629, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 390, |
|
"end": 397, |
|
"text": "Table 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 853, |
|
"end": 860, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The numbers in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "either French only or French only and shared words from the translation candidates, respectively. However, it did not solve the issue, since the shared vocabulary includes a large number of relevant French and Occitan words, which leads to either noise or missing Occitan words depending on their inclusion as translation candidates. On the other hand, P@10 scores achieved by our model are comparable and even significantly higher in case of Catalan than the baseline scores. This indicates that although not being the top 1 retrieved translation, the correct Occitan translation can be found in the near neighborhood of an English source word, indicating the good quality of our CLWEs. Consequently, our embeddings are still useful for various downstream tasks in the English \u2192 Occitan direction. For instance, when using them for cross-lingual transfer learning, e.g., classifying Occitan texts using a model trained on English, noise in the Occitan target space stemming from the related language vocabulary is not an issue, since the inputs to be classified are wellformed Occitan sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The numbers in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we presented a model for improving CLWE quality in low-resource setups by learning multilingual embedding spaces with a related language. To this end, a multilingual embedding space containing the low-resource source language, a related language, and the target language words is learned in two steps: first joint training of lowresource and related language embeddings; and second mapping the resulting CLWEs to a target language space. We pursued our experiments for the low-resource language Occitan with support from French, Spanish, or Catalan in different settings. We showed that our method improves the quality of CLWEs for these languages compared to both bilingual and multilingual baselines, especially when Catalan, the closest related language to Occitan, is incorporated (up to 15.5% P@1 improvement). Investigating multiple factors, we found that the degree of linguistic relatedness of the low-resource and the incorporated language is the most decisive for how much support a language provides. Our work indicates that novel approaches should not only focus on learning better representations using small corpora but also on incorporating data from related languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The illustration is available at http://lowlands-l.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Available at https://fasttext.cc/docs/en/ pretrained-vectors.html.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "They are based on the dictionaries available at https://www.freelang.net/dictionary/ occitan.php (for Oc-Fr and Oc-Es see linked French and Spanish versions of freelang).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.occitania.online.fr/aqui. comenca.occitania/dicolist.html 6 Available at https://www.freelang.com/ dictionnaire/catalan.php.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 640550) and by German Research Foundation (DFG; grant FR 2829/4-1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Cross-lingual word embeddings for low-resource language modeling", |
|
"authors": [ |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Adams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Makarucha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "937--947", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, and Trevor Cohn. 2017. Cross-lingual word embeddings for low-resource language model- ing. In Proceedings of EACL, pages 937-947.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Unsupervised Hyper-alignment for Multilingual Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Alaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Cuturi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceeding of IRLC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Alaux, Edouard Grave, Marco Cuturi, and Ar- mand Joulin. 2018. Unsupervised Hyper-alignment for Multilingual Word Embeddings. In Proceeding of IRLC.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Massively multilingual word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Waleed", |
|
"middle": [], |
|
"last": "Ammar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Mulcaire", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively multilingual word embeddings. CoRR, abs/1602.01925.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Learning bilingual word embeddings with (almost) no bilingual data", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "451--462", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of ACL, pages 451-462.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "789--798", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Pro- ceedings of ACL, pages 789-798.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Gorka Labaka, and Eneko Agirre. 2020. A call for more rigor in unsupervised cross-lingual learning", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dani", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
},

{

"first": "Gorka",

"middle": [],

"last": "Labaka",

"suffix": ""

},

{

"first": "Eneko",

"middle": [],

"last": "Agirre",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7375--7388", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Sebastian Ruder, Dani Yogatama, Gorka Labaka, and Eneko Agirre. 2020. A call for more rigor in unsupervised cross-lingual learning. In Proceedings of ACL, pages 7375-7388.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Manuel pratique de philologie romane", |
|
"authors": [ |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Bec", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pierre Bec. 1970. Manuel pratique de philologie ro- mane. Picard.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "135--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Unsupervised Multilingual Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Xilun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "261--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xilun Chen and Claire Cardie. 2018. Unsupervised Multilingual Word Embeddings. In Proceedings of EMNLP, pages 261-270.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Einf\u00fchrung in die Okzitanische Sprache", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Peter Cichon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Cichon. 2002. Einf\u00fchrung in die Okzitanische Sprache. Romanistischer Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Word Translation Without Parallel Data", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word Translation Without Parallel Data. In Proceed- ings of ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL: HLT, pages 4171-4186.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Anchor-based Bilingual Word Embeddings for Low-Resource Languages", |
|
"authors": [ |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Eder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viktor", |
|
"middle": [], |
|
"last": "Hangya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Fraser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of ACL-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "227--232", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tobias Eder, Viktor Hangya, and Alexander Fraser. 2021. Anchor-based Bilingual Word Embeddings for Low-Resource Languages. In Proceedings of ACL-IJCNLP, pages 227-232.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Universal neural machine translation for extremely low resource languages", |
|
"authors": [ |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hany", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{

"first": "Victor",

"middle": [

"O",

"K"

],

"last": "Li",

"suffix": ""

}
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of NAACL: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "344--354", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of NAACL: HLT, pages 344-354.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Learning Unsupervised Multilingual Word Embeddings with Incremental Multilingual Hubs", |
|
"authors": [ |
|
{ |
|
"first": "Geert", |
|
"middle": [], |
|
"last": "Heyman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bregt", |
|
"middle": [], |
|
"last": "Verreet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Francine", |
|
"middle": [], |
|
"last": "Moens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1890--1902", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geert Heyman, Bregt Verreet, Ivan Vuli\u0107, and Marie- Francine Moens. 2019. Learning Unsupervised Mul- tilingual Word Embeddings with Incremental Multi- lingual Hubs. In Proceedings of NAACL-HLT, pages 1890-1902.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Loss in translation: Learning bilingual word mapping with a retrieval criterion", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2979--2984", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv\u00e9 J\u00e9gou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of EMNLP, pages 2979-2984.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Generalizing Procrustes analysis for better bilingual dictionary induction", |
|
"authors": [ |
|
{ |
|
"first": "Yova", |
|
"middle": [], |
|
"last": "Kementchedjhieva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "211--220", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yova Kementchedjhieva, Sebastian Ruder, Ryan Cot- terell, and Anders S\u00f8gaard. 2018. Generalizing Pro- crustes analysis for better bilingual dictionary induc- tion. In Proceedings of CoNLL, pages 211-220.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Inducing crosslingual distributed representations of words", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Klementiev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Binod", |
|
"middle": [], |
|
"last": "Bhattarai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1459--1474", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representa- tions of words. In Proceedings of COLING, pages 1459-1474.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Phrase-based & neural unsupervised machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5039--5049", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine trans- lation. In Proceedings of EMNLP, pages 5039- 5049.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Exploring bilingual word embeddings for Hiligaynon, a low-resource language", |
|
"authors": [ |
|
{ |
|
"first": "Leah", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viktor", |
|
"middle": [], |
|
"last": "Hangya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Fraser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2573--2580", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leah Michel, Viktor Hangya, and Alexander Fraser. 2020. Exploring bilingual word embeddings for Hiligaynon, a low-resource language. In Proceed- ings of LREC, pages 2573-2580.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Exploiting similarities among languages for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Tom\u00e1s", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Quoc", |

"middle": [ |

"V" |

], |

"last": "Le", |

"suffix": "" |

}, |

{ |

"first": "Ilya", |

"middle": [], |

"last": "Sutskever", |

"suffix": "" |

} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom\u00e1s Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for ma- chine translation. CoRR, abs/1309.4168.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Improving statistical machine translation for a resource-poor language using related resource-rich languages", |
|
"authors": [ |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "44", |
|
"issue": "", |
|
"pages": "179--222", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Preslav Nakov and Hwee Tou Ng. 2012. Improving sta- tistical machine translation for a resource-poor lan- guage using related resource-rich languages. Jour- nal of Artificial Intelligence Research, 44:179-222.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Transfer learning across low-resource, related languages for neural machine translation", |
|
"authors": [ |
|
{ |

"first": "Toan", |

"middle": [ |

"Q" |

], |

"last": "Nguyen", |

"suffix": "" |

}, |

{ |

"first": "David", |

"middle": [], |

"last": "Chiang", |

"suffix": "" |

} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of IJC-NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "296--301", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toan Q. Nguyen and David Chiang. 2017. Transfer learning across low-resource, related languages for neural machine translation. In Proceedings of IJC- NLP, pages 296-301.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Analyzing the limitations of cross-lingual word embedding mappings", |
|
"authors": [ |
|
{ |
|
"first": "Aitor", |
|
"middle": [], |
|
"last": "Ormazabal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |

{ |

"first": "Aitor", |

"middle": [], |

"last": "Soroa", |

"suffix": "" |

}, |

{ |

"first": "Eneko", |

"middle": [], |

"last": "Agirre", |

"suffix": "" |

} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4990--4995", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and Eneko Agirre. 2019. Analyzing the lim- itations of cross-lingual word embedding mappings. In Proceedings of ACL, pages 4990-4995.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces", |
|
"authors": [ |
|
{ |
|
"first": "Barun", |
|
"middle": [], |
|
"last": "Patra", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Joel", |

"middle": [ |

"Ruben", |

"Antony" |

], |

"last": "Moniz", |

"suffix": "" |

}, |
|
{ |
|
"first": "Sarthak", |
|
"middle": [], |
|
"last": "Garg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gormley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "184--193", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, and Graham Neubig. 2019. Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces. In Proceedings of ACL, pages 184-193.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Software framework for topic modelling with large corpora", |
|
"authors": [ |
|
{ |

"first": "Radim", |

"middle": [], |

"last": "\u0158eh\u016f\u0159ek", |

"suffix": "" |

}, |

{ |

"first": "Petr", |

"middle": [], |

"last": "Sojka", |

"suffix": "" |

} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software frame- work for topic modelling with large corpora. In Pro- ceedings of the LREC 2010 Workshop on New Chal- lenges for NLP Frameworks, pages 45-50.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ori", |
|
"middle": [], |
|
"last": "Ram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Globerson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1599--1613", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of con- textual word embeddings, with applications to zero- shot dependency parsing. In Proceedings of NAACL: HLT, pages 1599-1613.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", |
|
"authors": [ |
|
{ |

"first": "Samuel", |

"middle": [ |

"L" |

], |

"last": "Smith", |

"suffix": "" |

}, |

{ |

"first": "David", |

"middle": [ |

"H", |

"P" |

], |

"last": "Turban", |

"suffix": "" |

}, |

{ |

"first": "Steven", |

"middle": [], |

"last": "Hamblin", |

"suffix": "" |

}, |

{ |

"first": "Nils", |

"middle": [ |

"Y" |

], |

"last": "Hammerla", |

"suffix": "" |

} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "On the limitations of unsupervised bilingual dictionary induction", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "778--788", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vuli\u0107. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of ACL, pages 778-788.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Do we really need fully unsupervised cross-lingual embeddings?", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Glava\u0161", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of EMNLP-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4407--4418", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Vuli\u0107, Goran Glava\u0161, Roi Reichart, and Anna Ko- rhonen. 2019. Do we really need fully unsuper- vised cross-lingual embeddings? In Proceedings of EMNLP-IJCNLP, pages 4407-4418.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "On the role of seed lexicons in learning bilingual word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "247--257", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Vuli\u0107 and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of ACL, pages 247-257.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Cross-lingual alignment vs joint training: A comparative study and a simple unified framework", |
|
"authors": [ |
|
{ |
|
"first": "Zirui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiateng", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruochen", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, and Jaime G. Carbonell. 2020. Cross-lingual alignment vs joint training: A com- parative study and a simple unified framework. In Proceedings of ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Distributed word representation learning for cross-lingual dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhong", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "119--129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of CoNLL, pages 119-129.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Adversarial training for unsupervised bilingual lexicon induction", |
|
"authors": [ |
|
{ |
|
"first": "Meng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huanbo", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1959--1970", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of ACL, pages 1959-1970.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "The Occitan-speaking area and its dialects.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Number of embeddings resulting from joint-align training with Occitan and a related language.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Corpora and vocabulary sizes of the extracted Wikipedia corpora (in millions).", |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"5\">Occitan French Spanish Catalan</td></tr><tr><td>Tokens</td><td colspan=\"3\">15.00 985.38</td><td colspan=\"2\">745.46 246.07</td></tr><tr><td>Types</td><td/><td>0.50</td><td>4.89</td><td>4.14</td><td>2.35</td></tr><tr><td/><td/><td>Oc/Fr</td><td/><td>Oc/Es</td><td>Oc/Ca</td></tr><tr><td colspan=\"2\">Types overall</td><td>5.08</td><td/><td>4.36</td><td>2.57</td></tr><tr><td colspan=\"2\">Types shared</td><td colspan=\"2\">0.31 (6.10%)</td><td colspan=\"2\">0.28 (6.42%) (10.89%) 0.28</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Vocabulary sizes of the joint corpora (in millions). 'Types shared' indicates the number of shared words among the two languages; the percentage of shared words per corpus is reported in parentheses.", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Number of word pairs in our bilingual dictionaries (number of unique source words in parentheses).", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "No. Model Src. emb. Train dict.", |
|
"num": null, |
|
"content": "<table><tr><td/><td/><td/><td/><td>P@1</td><td>P@10</td></tr><tr><td colspan=\"2\">a MUSE b</td><td>Oc</td><td>oc-en id char</td><td>15.47 15.91</td><td>31.05 30.94</td></tr><tr><td>c d</td><td>GPA</td><td>Oc</td><td>oc-en id char</td><td>15.69 15.91</td><td>32.38 31.71</td></tr></table>" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Results for English \u2192 Occitan BLI.", |
|
"num": null, |
|
"content": "<table><tr><td>(Parame-</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |