{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:08:04.460391Z" }, "title": "Overview of the Fourth BUCC Shared Task: Bilingual Dictionary Induction from Comparable Corpora", "authors": [ { "first": "Reinhard", "middle": [], "last": "Rapp", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Mainz", "location": {} }, "email": "reinhardrapp@gmx.de" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Mainz", "location": {} }, "email": "" }, { "first": "Serge", "middle": [], "last": "Sharoff", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Mainz", "location": {} }, "email": "s.sharoff@leeds.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The shared task of the 13th Workshop on Building and Using Comparable Corpora was devoted to the induction of bilingual dictionaries from comparable rather than parallel corpora. In this task, for a number of language pairs involving Chinese, English, French, German, Russian and Spanish, the participants were asked to determine automatically the target language translations of several thousand source language test words in three frequency ranges. We describe here some background, the task definition, the training and test data sets and the evaluation used for ranking the participating systems. We also summarize the approaches used and present the results of the evaluation. In conclusion, the outcome of the competition is the results of a number of systems which provide surprisingly good solutions to an ambitious problem.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The shared task of the 13th Workshop on Building and Using Comparable Corpora was devoted to the induction of bilingual dictionaries from comparable rather than parallel corpora. In this task, for a number of language pairs involving Chinese, English, French, German, Russian and Spanish, the participants were asked to determine automatically the target language translations of several thousand source language test words in three frequency ranges. We describe here some background, the task definition, the training and test data sets and the evaluation used for ranking the participating systems. We also summarize the approaches used and present the results of the evaluation. In conclusion, the outcome of the competition is the results of a number of systems which provide surprisingly good solutions to an ambitious problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the framework of machine translation, the extraction of bilingual dictionaries from parallel corpora has been conducted very successfully (see e.g. Mihalcea & Pedersen, 2003) . But on the other hand, human second language acquisition appears not to be based on parallel data. This means that there must be a way of acquiring and relating lexical knowledge across two or more languages without the use of parallel data.", "cite_spans": [ { "start": 151, "end": 177, "text": "Mihalcea & Pedersen, 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "It has been suggested that it may be possible to extract multilingual lexical knowledge from comparable rather than from parallel corpora (see e.g. . 
From a theoretical perspective, this suggestion may lead to advances in understanding human second language acquisition. From a practical perspective, as comparable corpora are available in much larger quantities than parallel corpora, this approach might help to relieve the data acquisition bottleneck, which tends to be especially severe when dealing with language pairs involving low-resource languages (see e.g. Martin et al., 2005).", "cite_spans": [ { "start": 568, "end": 588, "text": "Martin et al., 2005)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A well-established practical task to approach this topic is bilingual lexicon extraction from comparable corpora, which is the focus of this shared task. Typically, its aim is to extract word translations, as exemplified in Table 1, from comparable corpora, where a given source word may receive multiple translations. Note that, to reflect the tabular format used in the shared task, multiple translations of the same source word are listed in separate rows.", "cite_spans": [], "ref_spans": [ { "start": 231, "end": 238, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Quite a few research groups have been working on this problem using a wide variety of approaches. There are comprehensive studies such as Irvine & Callison-Burch (2017) as well as overview papers that at least in part discuss the topic, such as Jakubina & Langlais (2016) and Rapp et al. (2016). Table 1: Sample word translations from English to French. In the shared task a similar tab-separated format was used.", "cite_spans": [ { "start": 138, "end": 168, "text": "Irvine & Callison-Burch (2017)", "ref_id": "BIBREF5" }, { "start": 237, "end": 263, "text": "Jakubina & Langlais (2016)", "ref_id": "BIBREF6" }, { "start": 266, "end": 284, "text": "Rapp et al. (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 287, "end": 294, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "However, as there has so far been no standard way to measure the performance of such systems, the published results are not comparable and the pros and cons of the various approaches are not clear.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The present shared task 1 aimed at solving these problems by organizing a fair competition between systems. This was accomplished by providing corpora and bilingual datasets for a number of language pairs involving Chinese, English, French, German, Russian and Spanish, and by comparing the results using a common evaluation framework. For the shared task we provided corpora as well as training and test data. However, as we anticipated that these corpora and datasets might not suit all needs, we divided the shared task into two tracks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared Task Description", "sec_num": "2." }, { "text": "\uf0b7 In the closed track, participants were required to use only the data provided by the organizers. In this way equal conditions were ensured and, as the outcome of this track, the systems could be compared and ranked according to the quality of their results. \uf0b7 In the open track, participants were free to use their own corpora and training data. 
If possible, they were still expected to use the evaluation data provided in the closed track, but this was not mandatory either. The participants could even work on languages for which the shared task provided no data. If relevant, the participants were expected to describe why their systems were not suitable for the closed track, and to discuss the pros and cons of their choices. They were also encouraged to provide access to their data in order to facilitate replication by others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared Task Description", "sec_num": "2." }, { "text": "To give an overview of the steps to be conducted by the participating teams, Table 2 provides a checklist for the participants in abbreviated form. The time schedule is shown in Table 3. With about three weeks, the time span between the release of the test sets and the submission of the final results was foreseen to be relatively long in comparison to most other shared tasks, for the reason that some teams worked on more language pairs than others and would have been at a disadvantage if this time span had been a limiting factor (but it probably still was to some extent).", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 84, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 181, "end": 188, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Shared Task Description", "sec_num": "2." }, { "text": "Decide on the track and the language pairs. \uf0b7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7", "sec_num": null }, { "text": "Express your interest to the shared task organizers. You may also suggest new language pairs, and we might be able to help you with data. \uf0b7 Download the corpora from the shared task webpage (WaCky or Wikipedia). \uf0b7 Download the training data (bilingual word pairs) for your language pairs from the shared task webpage. \uf0b7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7", "sec_num": null }, { "text": "Run your system on the words on the source side of the training data and compute the translations. Compare your results with the target side of the training data and improve your system if necessary. \uf0b7 Download the test data on the date specified in the time schedule. \uf0b7 Run your system on the test data. Format your output in the same way as in the training data. \uf0b7 Before the deadline specified in the schedule, submit your results. \uf0b7 Write and submit a system description paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7", "sec_num": null }, { "text": "\uf0b7 Present your paper at the workshop. Table 4 lists the corpora to be used for the language pairs supported in the closed track. Due to their free availability for several languages and their size, for the shared task we used the WaCky corpora kindly provided by the Web-as-a-corpus initiative 2 (Baroni et al., 2009) and cleaned-up versions of Wikipedia dumps.", "cite_spans": [ { "start": 295, "end": 316, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 38, "end": 45, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "\uf0b7", "sec_num": null }, { "text": "The cells in Table 4 show which of the two types of corpora were supposed to be used for the two languages of a language pair when conducting the dictionary induction task. 
The rationale behind these choices is that the WaCky corpora, with a greater variety of topics and genres, seem somewhat better suited for the dictionary induction task. The WaCky corpora are cleaned-up web crawls. Their compressed sizes are: English: 3.2 GB, French: 3.0 GB, German: 3.0 GB, Russian: 4.1 GB. English, French, and German each comprise on the order of 2 billion running words, Russian about 3 billion (Sharoff et al., 2017).", "cite_spans": [ { "start": 599, "end": 621, "text": "(Sharoff et al., 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 13, "end": 20, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Corpora", "sec_num": "3.1" }, { "text": "The compressed sizes of the Wikipedia corpora are: English: 3.6 GB, Spanish: 0.9 GB, Chinese: 0.4 GB. They are in a one-line-per-document format. The first tab-separated field in each line contains metadata, the second field contains the text. Paragraph boundaries are marked with HTML tags. As cleaning up the original Wikipedia dump files is not trivial, occasionally there can be some noise in the form of not fully cleaned HTML and JavaScript fragments. Details of the cleanup and preparation procedure can be found in Sharoff et al. (2015).", "cite_spans": [ { "start": 522, "end": 543, "text": "Sharoff et al. (2015)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.1" }, { "text": "For the convenience of the shared task participants, we provided pre-trained fastText embeddings for all WaCky and Wikipedia corpora listed in Table 4 . They were trained on the Wikipedia or WaCky corpora and could readily be used in both tracks.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 150, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Embeddings", "sec_num": "3.2" }, { "text": "The fastText embeddings for the Wikipedia corpora were taken from Facebook AI Research (Bojanowski et al., 2017). 3 For the WaCky corpora, pre-trained fastText embeddings were computed and made available by Serge Sharoff as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings", "sec_num": "3.2" }, { "text": "\uf0b7 The .vec.xz files are text representations, widely used in various tools. \uf0b7 The .bin files are binary versions for use in fastText. \uf0b7 The following parameters were used: method: skipgram; minCount: 30; dim: 300; ws (context window): 7; epochs: 10; neg (number of negatives sampled): 10. The other parameters are the defaults for fastText.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings", "sec_num": "3.2" }, { "text": "For training and testing the systems, reasonable numbers of bilingual word pairs as exemplified in Table 1 had to be provided for the language pairs listed in Table 4 . Alexis Conneau from Facebook AI Research kindly gave us permission to use extracts from the MUSE \"Ground-truth bilingual dictionaries\" 4 for the shared task, as described in Conneau et al. (2017) . In this paper, the authors describe their data as follows: \"Word translation The task considers the problem of retrieving the translation of given source words. The problem with most available bilingual dictionaries is that they are generated using online tools like Google Translate, and do not take into account the polysemy of words. Failing to capture word polysemy in the vocabulary leads to a wrong evaluation of the quality of the word embedding space. 
Other dictionaries are generated using phrase tables of machine translation systems, but they are very noisy or trained on relatively small parallel corpora. For this task, we create high-quality dictionaries of up to 100k pairs of words using an internal translation tool to alleviate this issue. We make these dictionaries publicly available as part of the MUSE library\"", "cite_spans": [ { "start": 342, "end": 363, "text": "Conneau et al. (2017)", "ref_id": null } ], "ref_spans": [ { "start": 99, "end": 106, "text": "Table 1", "ref_id": null }, { "start": 159, "end": 166, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Training and test datasets", "sec_num": "3.3" }, { "text": "To us, the MUSE data on word translations appears to be derived from word-aligned parallel corpora by filtering out infrequent and therefore less reliable translations of a source language word. In particular, as it seems that at most five possible translations are provided for each source language word, it appears that only those target language translations which are aligned to at least 20% of the occurrences of a given source language word are listed. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and test datasets", "sec_num": "3.3" }, { "text": "For more than 100 language pairs, the MUSE data lists such word translations. The lists use UTF-8 encoding and lower case characters only. Apparently, they are sorted by descending corpus frequencies of the source language words. As an example, Table 5 shows the top 40 lines of the list for English-German. For some language pairs, blanks are used as separators between source word and translation, but tabs for others. Although this is not applicable to the current shared task, to provide for future extensions to multiword units, we unified this to tabs. Table 5: Top 40 translations from the English to German MUSE word translation data. Table 6 gives, in alphabetical order according to ISO language codes, 6 an overview of the number of bilingual word pairs (lines in the files) provided for each of the language pairs in the MUSE word translation data. 7 As can be seen in column Lines, this number varies between 20549 (ko-en) and 113324 (fr-en). However, as many source language words have several translations, the number of unique source language words (word types) is smaller. Column Types shows that this number varies between 13727 (ko-en) and 106473 (es-pt). Comparing the two columns gives an idea of the average number of translations for each source language word of a language pair.", "cite_spans": [], "ref_spans": [ { "start": 245, "end": 252, "text": "Table 5", "ref_id": null }, { "start": 559, "end": 566, "text": "Table 5", "ref_id": null }, { "start": 644, "end": 651, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Training and test datasets", "sec_num": "3.3" }, { "text": "Rather than providing one large set of training data for each language pair, we split the data into three frequency ranges and provide three equally-sized smaller sets per language pair. Looking at different frequency ranges is of scientific interest, as algorithms typically work best for high or medium frequency words, whereas the performance at low frequencies is often of higher practical relevance. 
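As an illustration, the Lines and Types counts of Table 6 can be obtained with a few lines of Python; this is a minimal sketch under the assumptions stated above (one tab-separated word pair per line, UTF-8 encoding), with a hypothetical file path:

def lines_and_types(path):
    # Count word-pair lines and unique source-language words (types).
    n_lines, source_types = 0, set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            if "\t" not in line:
                continue
            n_lines += 1
            source_types.add(line.split("\t", 1)[0])
    return n_lines, len(source_types)
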
The ratio between lines and types can be seen as a measure of the average fertility (number of translations) of the source language words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German", "sec_num": null }, { "text": "We split the data into three parts corresponding to frequency ranges of the source language words: The high frequency range covers bilingual word pairs whose source word is among the 5000 most frequent words in the MUSE data. The mid frequency range consists of words ranking between 5001 and 20000, and the low frequency range covers ranks 20001 to 50000. However, for languages where the MUSE data comprises fewer than 50000 unique source language words (see Table 6 ), we had to reduce these thresholds. For en-ru and ru-en the thresholds were set to 5000, 20000 and 40000. For en-zh they are at 5000, 15000 and 30000, and for zh-en they are at 4500, 9000 and 13500.", "cite_spans": [], "ref_spans": [ { "start": 468, "end": 475, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Lang", "sec_num": null }, { "text": "From these ranges we extracted (pseudo) random samples which we call bins. Each bin comprises 2000 unique source language words together with all their translations. As in the original MUSE data, the source language words in the bins are ordered according to frequency (most frequent first). Taking all three sets (per language pair) together, this gives 6000 unique source language words together with their translations, whereby, as shown in Table 5 , each possible translation is listed in a separate line along with the source language word.", "cite_spans": [], "ref_spans": [ { "start": 450, "end": 457, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Lang", "sec_num": null }, { "text": "Given large datasets and an ambitious shared task schedule, we did not have the time to manually correct the data files. However, although the MUSE dictionaries were apparently generated automatically, they seem mostly of reasonably good quality, with only a few errors. An exception is the low frequency range of English-Chinese, where almost all source language words are translated by identical target language words, which is not very useful. We encouraged the participants of the shared task to report such errors to us so that, as a positive side effect of the shared task, information for the improvement of the datasets was collected. For details, see the system description papers of the shared task participants in this volume.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lang", "sec_num": null }, { "text": "For testing the systems, lists of source language test words were provided which, based on word frequency, were likewise split into three sets of 2000 unique words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lang", "sec_num": null }, { "text": "We had informed the participants that if their algorithms required a seed lexicon, they should use an arbitrary part of the training data for this purpose. Our hope was that, with its 6000 source language words and even more translation pairs, the training set was large enough to provide for the participants' needs. If not, participants were referred to the open track of the shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lang", "sec_num": null }, { "text": "In this track, participants were free to work on other language pairs, use their own data and, if desired, use their own evaluation procedures. 
They were encouraged to describe in their papers the reasons and motivation for deviating from the procedures of the closed track and, if possible, to provide access to their data. We also indicated that we might be able to give support for other language pairs by providing cleaned-up Wikipedia corpora and datasets of word translations extracted from MUSE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Open Track", "sec_num": "4." }, { "text": "Note that the limited choice of language pairs in the closed track was deliberate, in order not to scatter participation over too many languages, which would have made comparisons between systems difficult. But in principle we were prepared to offer support for all language pairs covered by the MUSE dictionaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Open Track", "sec_num": "4." }, { "text": "As this appears to be the first shared task on the topic of dictionary induction from comparable corpora, we could not draw on previous experience. Due to this pilot character, in Track 1 we tried to keep things as clear and unsophisticated as possible. But in Track 2 we encouraged participants to challenge this simplicity, to freely experiment and to come up with new ideas, in the hope that the resulting insights will promote future progress in the field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Open Track", "sec_num": "4." }, { "text": "Despite the ambitious schedule of the shared task, four teams managed to submit their results in time. These teams and the tracks and language pairs they worked on are listed in Table 7 . As cited in the table, the first three teams have system description papers in this volume, which is why we only briefly describe their approaches here. LS2N (Laville et al., 2020) , closed track: de-en, en-de, de-fr, fr-de, en-es, es-en, en-fr, fr-en. SW, Sida Wang 8 , closed track: en-zh, zh-en. Table 7: Participating teams and their tracks and language pairs.", "cite_spans": [ { "start": 341, "end": 363, "text": "(Laville et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 178, "end": 185, "text": "Table 7", "ref_id": null }, { "start": 474, "end": 481, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Participants and Systems", "sec_num": "5." }, { "text": "The LMU team relies on bilingual word embeddings, which they claim to be effective in low-resource settings. However, as these typically do not perform well on low frequency words, the embeddings are supplemented with word surface similarity information such as orthography and transliteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Short", "sec_num": null }, { "text": "The LS2N team combines a word embedding approach with a concatenation approach based on Tomas Mikolov's well-known Word2vec 9 system, together with a cognate-matching approach based on string similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Short", "sec_num": null }, { "text": "The CEN team puts an emphasis on the transfer learning of semantics based on cross-lingual embeddings. 
For this purpose they experiment with different approaches, such as Word2Vec, Multilayer Perceptrons and Convolutional Neural Networks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Short", "sec_num": null }, { "text": "Sida Wang described his system as follows: 10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Short", "sec_num": null }, { "text": "\"1) The system does not use the training data for training; instead it uses identical mappings as initialization and uses the training set as a validation set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Short", "sec_num": null }, { "text": "2) An iterative procedure is used to figure out as much of the vocabulary as possible, independent of what is needed in the output (i.e. independent of the test set) 2a) I used the supervised rotation method where nearest neighbors (corrected with CSLS) are predicted as translations 2b) The iterative procedure adds (s,t) if t \u2208 top_k(s) and s \u2208 top_k(t) where a k of 2 did the best on the validation set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Short", "sec_num": null }, { "text": "3) My implementation is based on vecmap (https://github.com/artetxem/vecmap) but I only used a supervised procedure and a different iterative procedure as described above\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Short", "sec_num": null }, { "text": "For evaluation, participants of the closed track (for the open track this was optional) were asked to provide their results on the test data sets for the test words in each of the three frequency ranges. It was expected that for each source language word all its major translations were provided (whereby the definition of \"major\" was supposed to be inferred from the training data). These translations were compared to the translations found in the (internal) gold standard data, which is structurally similar to the training data, as it was randomly sampled from the same MUSE data in the same three frequency ranges. Only identical strings were considered correct, and the performance of a system was determined by computing precision (P), recall (R), and F1-score, the latter being the official score for system ranking. All data sets are in UTF-8 encoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "More precisely: the input to the system is a list of source language words, one per line. A system was supposed to return, for each input word, one or more candidate translations, in the form of tab-separated word pairs, each on its own line. For instance, in the English-French case, given the gold standard, test word list, and system output as shown in Table 8 , the system would get credited for two true positives, one false positive, and two false negatives, hence P = 2 / 3 = 0.67, R = 2 / 4 = 0.50, and F1 = 2 * (P * R) / (P + R) = 0.57. [Footnote 10: E-mail to shared task organizers (May 2, 2020).] Table 9 shows some pseudo-code for computing these scores in a very simple and efficient way. The implementation can be conducted using standard UNIX commands such as sort and wc. 
Procedure:
A = number of lines in file with system output
B = number of lines in file with gold standard data
C = A + B
Merge both input files
Conduct unique sort of the lines in the merged file
D = number of lines in uniquely sorted file
NoMatches = C - D
R = NoMatches / B
P = NoMatches / A
F1 = 2 * (P * R) / (P + R)
Table 9: Pseudo code for computing recall, precision and F1-score. Table 10 shows the participating teams' results for the closed track. These are overall results not considering the frequency bins, i.e. when the data from the three frequency bins are merged for the gold standard data and also for the system output data. ", "cite_spans": [], "ref_spans": [ { "start": 355, "end": 362, "text": "Table 8", "ref_id": "TABREF9" }, { "start": 587, "end": 594, "text": "Table 9", "ref_id": null }, { "start": 1086, "end": 1093, "text": "Table 9", "ref_id": null }, { "start": 1154, "end": 1162, "text": "Table 10", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "6." }, { "text": "Tables 12 to 15 show the teams' results when the high/mid/low frequency bins are distinguished. Again, no evaluation was conducted for CEN's ta-en (Tamil-English) language pair. Given the difficulty of the task, where the teams not only had to rank candidates but also had to decide precisely which ones to keep and which ones to discard, we found the best results surprisingly good.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results when considering frequency bins", "sec_num": "7.2" }, { "text": "Concerning the frequencies of the source language words, the results often get better with lower frequencies, showing that the methods are quite good at dealing with sparse data. Only the low frequency words of the language pair zh-en, with an astonishing F1-score of 0.852, benefit from an idiosyncrasy of the MUSE data: here almost all items consist of identical strings on the source and target language sides, which is particularly beneficial for the approach used by Sida Wang (see section 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results when considering frequency bins", "sec_num": "7.2" }, { "text": "Closed track by frequency. Lang. Team high freq.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results when considering frequency bins", "sec_num": "7.2" }, { "text": "mid freq. low freq. R P F1 R P F1 R P F1 de-en CEN 9.0 4.0 5.5 15.0 4.9 7.4 27.0 6.6 10.6 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results when considering frequency bins", "sec_num": "7.2" }, { "text": "The fourth BUCC shared task addressed the extraction of bilingual dictionaries from comparable corpora. This is a difficult task as, in contrast to parallel corpora, it is not clear in this case how to bridge the gap between languages. Nevertheless, the best participating systems achieved consistently good results for a number of language pairs, involving closely related as well as very distant languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Outlook", "sec_num": "8." }, { "text": "Of course, the provided datasets were not perfect: they were based on the automatically created MUSE dictionaries and, due to their considerable sizes, were not manually checked. 
For each of 28 language pairs they comprised 12000 unique source language words (6000 for the training sets and another 6000 for the test sets) together with a somewhat larger number of translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Outlook", "sec_num": "8." }, { "text": "Challenges of interest for future shared tasks on bilingual lexicon induction from comparable corpora include: 1) Finding mappings across the full set of inflected forms of two languages. For example, adequate in English maps to four cognate forms in Spanish: adecuado, adecuada, adecuados, adecuadas, corresponding to the choices of singular vs. plural and feminine vs. masculine, because English adjectives do not inflect for number and gender. The gold standard we used in the current shared task did not necessarily include the full range of forms. 2) Another issue concerns the representation of word senses in the test set. Since the gold standard translations were extracted from parallel corpora, where word selection in the target language is biased by the words in the source language, their set is likely to be different from what is available in general comparable corpora, such as the WaCky corpora and Wikipedia. For example, translations of strong voice extracted from the Europarl corpus primarily include references to expressions of opinions rather than assessments of the vocal cords. Translations also exhibit a cline from clear homonymy for words like bank to clear polysemy for words like heavy, in which the same sense can be translated slightly differently depending on the context: heavy luggage, heavy blow, heavy rain. More research is needed into the range of polysemous translations in the available test datasets. 3) In preparing data for this shared task we used information about the frequencies of words, as highly frequent words exhibit different translation properties from low-frequency words. However, the test lexicon contains other sources of variation which are worth a separate investigation, such as common names, borrowings or proper names. For example, borrowed proper names sometimes have trivial translations, e.g. Kazimierz maps to itself in the English to French evaluation set. 4) A particularly relevant topic is multiword expressions, which are omnipresent in specialized language. We did not address them at all here, but this should certainly be a fruitful direction of research in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Outlook", "sec_num": "8." }, { "text": "https://fasttext.cc/docs/en/pretrained-vectors.html 4 https://github.com/facebookresearch/MUSE 5 We are extrapolating from what we did ourselves in the previous COMTRANS project, which, however, covered only a few language pairs (https://cordis.europa.eu/project/id/23845) 6 https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As of May 2020, the MUSE website lists dictionaries for 110 language pairs (see https://github.com/facebookresearch/MUSE). However, there is a double occurrence of the en-en file (identical files with the same English words on the source and the target side). 
We list this file only once in our table, which is why we have only 109 items in Table 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.sidaw.xyz/, https://www.linkedin.com/in/sidaw 9 https://en.wikipedia.org/wiki/Word2vec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Normal font: Results based on overall file (no distinction of frequency bins) as provided by team. Italics: Results from merged high/mid/low-frequency bins. Bins provided by team.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Alexis Conneau from Facebook AI Research for allowing us to use extracts of the MUSE word translation data for the shared task, and the Web-as-a-corpus initiative for providing the WaCky corpora. This work was partially funded by the Marie Curie Career Integration Grant MULTILEX within the 7th European Community Framework Programme and by the Marie Curie Individual Fellowship SEBAMAT within the European Commission's Horizon 2020 Framework Programme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "9." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The WaCky wide web: A collection of very large linguistically processed web-crawled corpora", "authors": [ { "first": "Marco", "middle": [ ";" ], "last": "Baroni", "suffix": "" }, { "first": "", "middle": [], "last": "Bernardini", "suffix": "" }, { "first": ";", "middle": [], "last": "Silvia", "suffix": "" }, { "first": "", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": ";", "middle": [], "last": "Adriano", "suffix": "" }, { "first": "Eros", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 2009, "venue": "Language Resources and Evaluation", "volume": "43", "issue": "3", "pages": "209--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baroni, Marco; Bernardini, Silvia; Ferraresi, Adriano; Zanchetta, Eros (2009). The WaCky wide web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation 43 (3): 209-231.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bojanowski, Piotr; Grave, Edouard; Joulin, Armand; Mikolov, Tomas (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5, 135-146.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word translation without parallel data", "authors": [ { "first": "Ludovic", "middle": [ ";" ], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.04087" ] }, "num": null, "urls": [], "raw_text": "Conneau, Alexis; Lample, Guillaume; Ranzato, Marc'Aurelio; Denoyer, Ludovic; J\u00e9gou, Herv\u00e9 (2017). Word translation without parallel data. 
arXiv preprint arXiv:1710.04087 (published at ICLR 2018).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A comprehensive analysis of bilingual lexicon induction", "authors": [ { "first": "Ann", "middle": [ ";" ], "last": "Irvine", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "2", "pages": "273--310", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irvine, Ann; Callison-Burch, Chris (2017). A comprehensive analysis of bilingual lexicon induction. Computational Linguistics 43 (2), 273-310.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A comparison of methods for identifying the translation of words in a comparable corpus: recipes and limits", "authors": [ { "first": "Laurent", "middle": [ ";" ], "last": "Jakubina", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Langlais", "suffix": "" } ], "year": 2016, "venue": "Computaci\u00f3n y Sistemas", "volume": "20", "issue": "3", "pages": "449--458", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jakubina, Laurent; Langlais, Philippe (2016). A comparison of methods for identifying the translation of words in a comparable corpus: recipes and limits. Computaci\u00f3n y Sistemas 20 (3), 449-458.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "TALN/LS2N participation at the BUCC shared task: bilingual dictionary induction from comparable corpora. Proceedings of the 13th Workshop on Building and Using Comparable Corpora", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laville et al. (2020). TALN/LS2N participation at the BUCC shared task: bilingual dictionary induction from comparable corpora. Proceedings of the 13th Workshop on Building and Using Comparable Corpora.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Word alignment for languages with scarce resources", "authors": [ { "first": "Joel", "middle": [ ";" ], "last": "Martin", "suffix": "" }, { "first": "", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": ";", "middle": [], "last": "Rada", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Building and Using Parallel Texts", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, Joel; Mihalcea, Rada; Pedersen, Ted (2005). Word alignment for languages with scarce resources. Proceedings of the ACL Workshop on Building and Using Parallel Texts.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An evaluation exercise for word alignment", "authors": [ { "first": "Rada", "middle": [ ";" ], "last": "Mihalcea", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihalcea, Rada; Pedersen, Ted (2003). An evaluation exercise for word alignment. 
Proceedings of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Recent advances in machine translation using comparable corpora", "authors": [ { "first": "Reinhard", "middle": [ ";" ], "last": "Rapp", "suffix": "" }, { "first": "Serge", "middle": [ ";" ], "last": "Sharoff", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2016, "venue": "Journal of Natural Language Engineering", "volume": "22", "issue": "4", "pages": "501--516", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rapp, Reinhard; Sharoff, Serge; Zweigenbaum, Pierre (2016). Recent advances in machine translation using comparable corpora. Journal of Natural Language Engineering 22 (4), 501-516.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "BUCC 2020: bilingual dictionary induction using cross-lingual embedding", "authors": [ { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 13th Workshop on Building and Using Comparable Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soman, KP (2020). BUCC 2020: bilingual dictionary induction using cross-lingual embedding. Proceedings of the 13th Workshop on Building and Using Comparable Corpora.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "LMU bilingual dictionary induction system with word surface similarity scores for BUCC 2020", "authors": [ { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 13th Workshop on Building and Using Comparable Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sch\u00fctze, Hinrich (2020). LMU bilingual dictionary induction system with word surface similarity scores for BUCC 2020. Proceedings of the 13th Workshop on Building and Using Comparable Corpora.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Frequency Dictionary: Russian. Leipziger Universit\u00e4tsverlag", "authors": [ { "first": "Serge", "middle": [ ";" ], "last": "Sharoff", "suffix": "" }, { "first": "Dirk", "middle": [ ";" ], "last": "Goldhahn", "suffix": "" }, { "first": "Uwe", "middle": [], "last": "Quasthoff", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharoff, Serge; Goldhahn, Dirk; Quasthoff, Uwe (2017). Frequency Dictionary: Russian. Leipziger Universit\u00e4tsverlag. http://corpus.leeds.ac.uk/serge/publications/2017-russian-frq-leipzig.pdf", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Overviewing important aspects of the last twenty years of research in comparable corpora", "authors": [ { "first": "Serge", "middle": [ ";" ], "last": "Sharoff", "suffix": "" }, { "first": "Reinhard", "middle": [ ";" ], "last": "Rapp", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2013, "venue": "Building and Using Comparable Corpora", "volume": "", "issue": "", "pages": "1--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharoff, Serge; Rapp, Reinhard; Zweigenbaum, Pierre (2013). Overviewing important aspects of the last twenty years of research in comparable corpora. In: Serge Sharoff, Reinhard Rapp, Pierre Zweigenbaum, Pascale Fung (eds.): Building and Using Comparable Corpora. 
Heidelberg: Springer, 1-18.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "BUCC shared task: cross-language document similarity", "authors": [ { "first": "Serge", "middle": [ ";" ], "last": "Sharoff", "suffix": "" }, { "first": "", "middle": [], "last": "Zweigenbaum", "suffix": "" }, { "first": ";", "middle": [], "last": "Pierre", "suffix": "" }, { "first": "Reinhard", "middle": [], "last": "Rapp", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Eighth Workshop on Building and Using Comparable Corpora, Beijing, China. ACL Anthology", "volume": "", "issue": "", "pages": "74--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharoff, Serge; Zweigenbaum, Pierre; Rapp, Reinhard (2015). BUCC shared task: cross-language document similarity. Proceedings of the Eighth Workshop on Building and Using Comparable Corpora, Beijing, China. ACL Anthology, 74-78, http://www.aclweb.org/anthology/W15-3411.pdf", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "text": "https://comparable.limsi.fr/bucc2020/bucc2020-task.html", "content": "
Source (English)	Target (French)
baby	b\u00e9b\u00e9
baby	poupon
bath	bain
bed	lit
bed	plumard
convenience	commodit\u00e9
doctor	m\u00e9decin
doctor	docteur
eagle	aigle
mountain	montagne
nervous	nerveux
work	travail
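A minimal sketch of loading such tab-separated word pairs into a translation table (the file name is hypothetical; each line holds one source-target pair, so a source word may occur in several lines):

from collections import defaultdict

def load_translations(path):
    # Map each source word to the set of its translations.
    table = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 2:
                table[fields[0]].add(fields[1])
    return table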
", "num": null, "type_str": "table" }, "TABREF1": { "html": null, "text": "Checklist for participants (abbreviated).", "content": "
Any time: Expressions of interest to participate in the shared task
January 12, 2020: Release of shared task training sets
February 16, 2020: Release of shared task test sets
March 5, 2020: Submission of shared task results
", "num": null, "type_str": "table" }, "TABREF2": { "html": null, "text": "Time schedule.", "content": "", "num": null, "type_str": "table" }, "TABREF4": { "html": null, "text": "", "content": "
: Language pairs supported and corpora (WaCky or Wikipedia) to be used in the closed track.
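As a minimal sketch (file name hypothetical), the one-line-per-document Wikipedia format described in Section 3.1 (first tab-separated field metadata, second field text) can be read as follows:

def iter_wiki_documents(path):
    # Yield (metadata, text) pairs, one document per line.
    with open(path, encoding="utf-8") as f:
        for line in f:
            meta, _, text = line.rstrip("\n").partition("\t")
            yield meta, text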
", "num": null, "type_str": "table" }, "TABREF7": { "html": null, "text": "", "content": "", "num": null, "type_str": "table" }, "TABREF9": { "html": null, "text": "", "content": "
: Sample gold standard, test word list and system output for the English-French case.
Inputs:
File with system output
File with gold standard data
Assumptions:
Tab-separated word pairs in both files (as in Table 1)
Only unique lines in both files (no repetitions)
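Under these assumptions, the scores of Table 9 can equivalently be computed with set operations; a minimal Python sketch (file names hypothetical):

def evaluate(system_file, gold_file):
    # As all lines are unique, the lines occurring in both files
    # are exactly the true positives (NoMatches in Table 9).
    with open(system_file, encoding="utf-8") as f:
        system = {line.rstrip("\n") for line in f if line.strip()}
    with open(gold_file, encoding="utf-8") as f:
        gold = {line.rstrip("\n") for line in f if line.strip()}
    matches = len(system & gold)
    p = matches / len(system) if system else 0.0
    r = matches / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

Applied to the example of Table 8, this yields P = 0.67, R = 0.50 and F1 = 0.57.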
", "num": null, "type_str": "table" }, "TABREF10": { "html": null, "text": "shows analogous data for the open track. No evaluation was conducted for CEN's ta-en (Tamil-English) language pair as we had not provided a test set for this.", "content": "
Overall results closed track
Lang.   Team     R     P     F1
de-en   CEN      15.3  5.2   7.7
de-en   LMU      48.7  61.6  54.4
de-en   LS2N     57.5  66.2  61.5
en-de   LMU      40.2  59.8  48.1
en-de   LS2N     54.3  54.8  54.5
en-ru   LMU      33.9  37.8  35.8
en-ru   LS2N 11  32.6  38.7  35.4
en-ru   LS2N 11  37.8  30.7  33.9
ru-en   LMU      43.9  56.7  49.5
ru-en   LS2N     35.5  56.7  43.7
de-fr   LS2N     76.8  76.7  76.8
fr-de   LS2N     78.3  64.9  71.0
en-es   LS2N     63.8  61.4  62.6
es-en   LS2N     67.5  75.1  71.1
en-fr   LS2N     61.2  69.7  65.1
fr-en   LS2N     46.0  64.6  53.7
en-zh   SW       45.3  54.6  49.5
zh-en   SW       33.6  40.9  36.9
Table 10: Overall results for the closed track.
Overall results open track
Lang.   Team     R     P     F1
de-en   LMU      50.6  63.8  56.4
en-de   LMU      41.1  61.1  49.2
en-ru   LMU      39.3  43.8  41.4
ru-en   LMU      50.7  65.4  57.1
Table 11: Overall results for the open track.
", "num": null, "type_str": "table" }, "TABREF11": { "html": null, "text": "6 LMU 44.7 49.1 46.8 43.4 70.9 53.8 62.8 77.1 69.2 LS2N 48.1 63.7 54.8 59.0 63.0 60.9 72.2 73.3 72.Results by frequency for the closed track for language pairs where only LS2N participated. Results by frequency for the closed track for language pairs where only SW participated. Results by frequency for the open track for language pairs where only LMU participated.", "content": "
8
", "num": null, "type_str": "table" } } } }