{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:16.818387Z" }, "title": "BERT Cannot Align Characters", "authors": [ { "first": "Antonis", "middle": [], "last": "Maronikolakis", "suffix": "", "affiliation": { "laboratory": "", "institution": "LMU Munich", "location": { "country": "Germany" } }, "email": "" }, { "first": "Philipp", "middle": [], "last": "Dufter", "suffix": "", "affiliation": { "laboratory": "", "institution": "LMU Munich", "location": { "country": "Germany" } }, "email": "philipp@cis.lmu.de" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "", "affiliation": { "laboratory": "", "institution": "LMU Munich", "location": { "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In previous work, it has been shown that BERT can adequately align cross-lingual sentences on the word level. Here we investigate whether BERT can also operate as a char-level aligner. The languages examined are English, Fake-English, German and Greek. We show that the closer two languages are, the better BERT can align them on the character level. BERT indeed works well in English to Fake-English alignment, but this does not generalize to natural languages to the same extent. Nevertheless, the proximity of two languages does seem to be a factor. English is more related to German than to Greek and this is reflected in how well BERT aligns them; English to German is better than English to Greek. We examine multiple setups and show that the similarity matrices for natural languages show weaker relations the further apart two languages are.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In previous work, it has been shown that BERT can adequately align cross-lingual sentences on the word level. Here we investigate whether BERT can also operate as a char-level aligner. The languages examined are English, Fake-English, German and Greek. We show that the closer two languages are, the better BERT can align them on the character level. BERT indeed works well in English to Fake-English alignment, but this does not generalize to natural languages to the same extent. Nevertheless, the proximity of two languages does seem to be a factor. English is more related to German than to Greek and this is reflected in how well BERT aligns them; English to German is better than English to Greek. We examine multiple setups and show that the similarity matrices for natural languages show weaker relations the further apart two languages are.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "For the many sweeping successes BERT has had in the field of Natural Language Processing, the model's alignment capabilities have been lacking and under-explored. Work in this area is picking up and it has been shown that BERT can operate with adequate efficiency in word alignment tasks (Zenkel et al., 2019; Jalili Sabet et al., 2020) . 
The question of whether BERT can perform character-level alignment, however, has not yet been answered.", "cite_spans": [ { "start": 288, "end": 309, "text": "(Zenkel et al., 2019;", "ref_id": "BIBREF26" }, { "start": 310, "end": 336, "text": "Jalili Sabet et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Even though characters on their own do not necessarily hold much semantic meaning, we investigate whether BERT is able to generate useful representation spaces for characters. Character-level alignment would be useful in tasks like transliteration (Li et al., 2009; Sajjad et al., 2017) or word-level alignment (Legrand et al., 2016). Transliterations often add or drop grammatical inflections, which causes difficulties in an array of tasks (Czarnowska et al., 2019; Vania and Lopez, 2017). With character-level awareness, we can build better models for transliteration detection and extraction tasks. Word alignment could also benefit from character-level information in instances where words get split up within a sentence (e.g., separable verbs in German or phrasal verbs in English). With our work we show that even though BERT is not able to align languages on the character level, the closer two languages are, the better the alignment. In the trivial case of English to Fake-English alignment, the model successfully learns to align characters. For English to German, performance drops substantially, and it drops even more for English to Greek. BERT seems to place languages on an intuitive scale, with more similar languages being aligned better than highly dissimilar ones.", "cite_spans": [ { "start": 248, "end": 265, "text": "(Li et al., 2009;", "ref_id": "BIBREF15" }, { "start": 266, "end": 286, "text": "Sajjad et al., 2017)", "ref_id": "BIBREF20" }, { "start": 310, "end": 332, "text": "(Legrand et al., 2016)", "ref_id": "BIBREF14" }, { "start": 473, "end": 487, "text": "(Czarnowska et al., 2019;", "ref_id": "BIBREF4" }, { "start": 494, "end": 516, "text": "Vania and Lopez, 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Research has been conducted to uncover elements of multilinguality in mBERT. In Pires et al. (2019), an analysis of mBERT is presented, while Conneau and Lample (2019) study cross-lingual language model pretraining. In Wu et al. (2021); Garg et al. (2019), it is shown that transformers (Vaswani et al., 2017) can achieve performance similar to sequence-to-sequence approaches based on Recurrent Neural Networks (Luong et al., 2015) for character-level tasks such as transliteration and grapheme-to-phoneme conversion. Further work to develop character-level BERT-based models is conducted in El Boukkouri et al. (2020); Ma et al. (2020).", "cite_spans": [ { "start": 80, "end": 99, "text": "Pires et al. (2019)", "ref_id": "BIBREF19" }, { "start": 173, "end": 189, "text": "Wu et al. (2021)", "ref_id": "BIBREF24" }, { "start": 192, "end": 210, "text": "Garg et al. (2019)", "ref_id": "BIBREF9" }, { "start": 243, "end": 265, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF23" }, { "start": 369, "end": 389, "text": "(Luong et al., 2015)", "ref_id": "BIBREF16" }, { "start": 574, "end": 590, "text": "Ma et al. (2020)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Och and Ney (2003); Legrand et al.
(2016) worked towards representation-based word alignment, with implementations of aligners proposed in Dyer et al. (2013); \u00d6stling and Tiedemann (2016).", "cite_spans": [ { "start": 18, "end": 39, "text": "Legrand et al. (2016)", "ref_id": "BIBREF14" }, { "start": 137, "end": 155, "text": "Dyer et al. (2013)", "ref_id": "BIBREF7" }, { "start": 158, "end": 186, "text": "\u00d6stling and Tiedemann (2016)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We also note recent efforts towards unsupervised word alignment. In Zenkel et al. (2019), an extension to the usual machine translation encoder-decoder is proposed to jointly learn translation and word alignment in an unsupervised manner. BERT has also been shown to be able to perform word alignment (Jalili Sabet et al., 2020) through embedding matrix similarities.", "cite_spans": [ { "start": 68, "end": 88, "text": "Zenkel et al. (2019)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The EuroParl Corpus (Koehn, 2005) is a parallel corpus containing recorded proceedings of the European Parliament. Originally 21 languages were included, although here we examined three: English (ENG), German (DEU) and Greek (ELL). 1 Each set is split into words and each word is further split into characters. Then, special start and end tokens are added around each (split) word. This process results in a data file where each line contains a word split into characters. Finally, these language sets are merged together, alternating between lines. For example, in the English \u2192 German setup, the first line contains an English word (split into characters), the second line a German word (split into characters), and so on. For the conversion of English to Fake-English, we employ a mapping of characters to integers. The integers are in the range [100, 151]. The same mapping takes place for the English \u2192 German setup, since the two languages share the same script. Data is given as input line-by-line to the model.", "cite_spans": [ { "start": 20, "end": 32, "text": "(Koehn, 2005", "ref_id": "BIBREF13" }, { "start": 843, "end": 848, "text": "[100,", "ref_id": null }, { "start": 849, "end": 853, "text": "151]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data Setup", "sec_num": "3.1" }, { "text": "We experimented with different BERT (Devlin et al., 2019) model sizes and parameters. In our hyperparameter search, we mainly examined the effect of hidden layer size, number of layers, embedding size and number of attention heads. We found that when models have fewer than 3 layers or more than 9 layers, we see underfitting and overfitting, respectively. In the end, we settled on a 6-layer model with a quarter of the original BERT-base parameters, trained for 50 epochs. An analysis of the effect of model size on performance is omitted. We train from scratch on the usual masked language modeling task as described in (Devlin et al., 2019), with the difference that we mask individual characters instead of subword tokens.", "cite_spans": [ { "start": 36, "end": 57, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 623, "end": 644, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Model Setup", "sec_num": "3.2" }, { "text": "For our control experiment, we tried to align English and Fake-English.
This setup serves as an aid to hyperparameter tuning. Because of the nature of English and Fake-English, if a model cannot align these two, it will not work for natural languages. The English data was split into two sets, with the second converted to Fake-English. In our setup, Fake-English is a simple mapping from English characters to numbers ranging from 100 to 151. That is, 'a' is converted to '100', 'b' to '101', 'A' to '126', etc. All other numbers were removed from both sets. We call this setup EngFake_base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3.3" }, { "text": "Apart from the base setup, we tried minor alterations. Namely, we tried to break the one-to-one mapping from English to Fake-English. The letter 'f' was mapped not to a single, unique integer, but instead to two new indices (denoted by f1 and f2). So, 'f' was mapped to '200 201', which are the unique tokens for f1 and f2, respectively. In the same way, capital 'F' was replaced by the tokens for F1 and F2. We call this setup EngFake_f1f2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3.3" }, { "text": "After the successful EngFake experiments, we experiment with English to Greek (EngEll) and English to German (EngDeu). For English to German, since the languages share the same script, we converted German to Fake-German in a manner analogous to the Fake-English conversion. Finally, German to Greek (DeuEll) experiments were also conducted for completeness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3.3" }, { "text": "Firstly, we examine the uncontextualized embeddings of characters. To retrieve the representation of a character, we give it as input to the model and extract the embedding layer activations. We also investigate contextualized embeddings by feeding entire words into the model and extracting the activations for a particular character at the 5th layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3.3" }, { "text": "For all the setups, a separate development set was used for evaluation by holding out 30% of the examples (i.e., the lines in the dataset). After training the respective models, we compute the cosine similarity matrix between the two alphabets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3.3" }, { "text": "The cosine similarity matrices are shown for our different setups. First, we give as input the characters of the two alphabets separately and extract their first-layer representations. We examined all layer representations, but since the characters are given without context, we decided to go with the first layer, which has been shown to contain context-independent information (Jawahar et al., 2019).", "cite_spans": [ { "start": 378, "end": 400, "text": "(Jawahar et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Matrix Comparisons", "sec_num": "4.1" }, { "text": "In Figure 1 (a), the cosine similarity matrix for EngFake_base is presented. The diagonal shows that the model correctly aligns English with the base Fake-English language. In Figure 1 (b), we see the similarity matrix of EngFake_f1f2. The strong diagonal indicates that BERT indeed manages to align English with its Fake-English equivalent. The added perturbation (f1f2 instead of 'f') is correctly captured by the model as well.
The English 'f' has high similarity scores with both f1 and f2, with f1 having a slightly higher score. We also compare against the combined f1f2 bigram by computing its joint representation. We denote this new, combined token simply with 'f' in the matrix.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 1", "ref_id": "FIGREF2" }, { "start": 176, "end": 188, "text": "Figure 1 (b)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Similarity Matrix Comparisons", "sec_num": "4.1" }, { "text": "For EngDeu, in Figure 2 (a), we see lower performance. The diagonal is still observed, but less prominently than before. The two languages are relatively similar and belong to the same language family, so some similarity on the character level is to be expected. Nevertheless, the similarity matrix shows significant noise that obscures the diagonal. Note that even though the diagonal is not the best indicator of performance, the model was overall less effective than expected.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Similarity Matrix Comparisons", "sec_num": "4.1" }, { "text": "One of the patterns that emerged is the high similarity of the matrix's right-hand side (German 'w', 'x', 'y' and 'z') with most of the English characters. High similarities can be found between these characters and most English characters across the board. This could be because of the low frequency of these letters in German. 2 Apart from that, there are a few other clusters around the matrix, for example 'w', 'x' and 'y' for English and the 'h'-'o' range in German.", "cite_spans": [ { "start": 91, "end": 121, "text": "(German 'w', 'x', 'y' and 'z')", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Matrix Comparisons", "sec_num": "4.1" }, { "text": "When compared against a language further away from English than German, in this case Greek, similarities become even fainter. In Figure 2 (b), the similarity matrix for EngEll is presented. We see weaker similarity scores across the board, with results mostly random. As we show in Section 4.2, only a few characters were correctly aligned. In English, 'e', 'j', 't' and 'z' have high similarity scores overall, with '\u03c1', '\u03c3' and '\u03c4' in Greek scoring highly across the board as well.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 136, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Similarity Matrix Comparisons", "sec_num": "4.1" }, { "text": "Finally, in Figure 2 (c), DeuEll is shown. There is very little that can be inferred from this matrix, since performance seems to be random.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Similarity Matrix Comparisons", "sec_num": "4.1" }, { "text": "Here we take a closer look at the similarity matrices and quantify how well our model aligns characters compared to a ground truth value. Even though ground truth alignment between characters is not always clear (for example, the English 'a' has multiple pronunciations: 'allure', 'ball', 'make', which would arguably map to three distinct characters in Greek), there are some obviously incorrect alignments we can observe (for example, the English 'a' should never be aligned with any of the Greek consonants).
Thus, this max alignment method is an adequate indicator of model performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Max Alignment Accuracy", "sec_num": "4.2" }, { "text": "In Table 1, we examine alignments for EngDeu and EngEll. When choosing a target character, we search in the similarity matrix for the given pair of languages and choose the character with the maximum cosine similarity. In the natural language setups, the model fares rather poorly. In EngDeu, the model correctly aligns 11 characters (this is of course a simplification, since alignments such as 'k' \u2192 'c' could also be considered correct; here we consider only the most basic of alignments: the ones on the diagonal). The situation is worse in EngEll, where only 3 characters were correctly aligned. EngFake results are omitted, since the model correctly aligned all characters in all setups.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Max Alignment Accuracy", "sec_num": "4.2" }, { "text": "We also perform a qualitative study on contextualized character-level representations. We choose three pairs of words, feed them separately to the model and extract their 5th-layer representations (averages of all layers, as well as other individual layers, were also examined with similar results; the best-performing layer was chosen). After computing the contextual representations for all characters in the English word, we align them with the contextual representations of the characters in the Greek word. In Table 2 these alignments are shown.", "cite_spans": [], "ref_spans": [ { "start": 514, "end": 521, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Contextualized Embeddings", "sec_num": "4.3" }, { "text": "Results are seemingly random, with a strong bias towards the diagonal 3 . The model, thus, has not learned any meaningful contextual representations either.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextualized Embeddings", "sec_num": "4.3" }, { "text": "We also performed some minor experiments with static embeddings and adjustments to the MLM task, as well as a variation of our EngFake setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Studies", "sec_num": "5" }, { "text": "Apart from the BERT embeddings, we also examined static embeddings to introduce another baseline. Specifically, we used the FastText algorithm (Bojanowski et al., 2017). The data was the same as in the previous experiments. Results were seemingly random, with FastText unable to capture any meaningful representations. Cosine similarities were generally very low and alignments incoherent.", "cite_spans": [ { "start": 144, "end": 169, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Ablation Studies", "sec_num": "5" }, { "text": "For the MLM task, we experimented with different probabilities for token masking. In the original BERT paper, the chosen token is replaced by [MASK] 80% of the time, remains the same 10% of the time and gets replaced by a random token the remaining 10% of the time. We experimented with the following distributions: 80/20/0, 60/20/20 and 50/50/0, with no noticeable change in performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Studies", "sec_num": "5" }, { "text": "We also experimented with overlapping Fake-English mappings in the form 'a' \u2192 '100 101 102'.
That is, each character is mapped to the tokens corresponding to itself and its next two characters (e.g., 'a' is mapped to the tokens for 'a', 'b' and 'c'). This is an extreme case of the EngFake_f1f2 setup, where each character is mapped to three tokens (instead of only 'f' being mapped to 'f1' and 'f2'). Results deteriorated to random performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Studies", "sec_num": "5" }, { "text": "Finally, we conducted another EngFake experiment. Namely, we restricted the context in which we present each character. For the previous experiments, each word was split into characters. Now, we further split each word into trigrams. So, instead of 'e x a m p l e', we end up with multiple overlapping trigram entries: 'e x a', 'x a m', 'a m p', etc. The motivation for this experiment is that since the transliteration of a character in a word does not require the entire word but only its direct neighbors, we should examine a setup with more restricted context 4 . In this case, results deteriorated heavily and were seemingly random, with no meaningful representations captured.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Studies", "sec_num": "5" }, { "text": "In our work, BERT is shown to be unable to create consistently good cross-lingual spaces on the character level. We train models on English, German, Greek and Fake-English, and we compare character-level alignments between them. Cosine similarity matrices between the target and source alphabets were examined, and we found that the closer two languages are, the better BERT does in aligning them. Fake-English is the easiest to align with English, German is harder, and Greek trails far behind. We conclude that BERT is not able to perform adequate character alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We follow the ISO 639-3 standard for language codes: https://iso639-3.sil.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.sttmedia.com/characterfrequency-german", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To control for character positions, we also aligned characters without positional embeddings and results got worse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "While this is a more nuanced subject, for the purposes of this ablation experiment we assumed this statement held.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by ERCAdG #740516.
We want to thank the anonymous reviewers for their insightful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On the cross-lingual transferability of monolingual representations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.421" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of mono- lingual representations. In Proceedings of the 58th", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "4623--4637", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": { "DOI": [ "10.1162/tacl_a_00051" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "7059--7069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32, pages 7059-7069. Curran Associates, Inc.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Don't forget the long tail! 
a comprehensive analysis of morphological generalization in bilingual lexicon induction", "authors": [ { "first": "Paula", "middle": [], "last": "Czarnowska", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "974--983", "other_ids": { "DOI": [ "10.18653/v1/D19-1090" ] }, "num": null, "urls": [], "raw_text": "Paula Czarnowska, Sebastian Ruder, Edouard Grave, Ryan Cotterell, and Ann Copestake. 2019. Don't forget the long tail! a comprehensive analysis of morphological generalization in bilingual lexicon in- duction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 974-983, Hong Kong, China. Association for Com- putational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Identifying elements essential for BERT's multilinguality", "authors": [ { "first": "Philipp", "middle": [], "last": "Dufter", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4423--4437", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.358" ] }, "num": null, "urls": [], "raw_text": "Philipp Dufter and Hinrich Sch\u00fctze. 2020. Identifying elements essential for BERT's multilinguality. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4423-4437, Online. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A simple, fast, and effective reparameterization of IBM model 2", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Chahuneau", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "644--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameteriza- tion of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "CharacterBERT: Reconciling ELMo and BERT for word-level open-vocabulary representations from characters", "authors": [ { "first": "Hicham", "middle": [ "El" ], "last": "Boukkouri", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Ferret", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lavergne", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Noji", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "6903--6915", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.609" ] }, "num": null, "urls": [], "raw_text": "Hicham El Boukkouri, Olivier Ferret, Thomas Lavergne, Hiroshi Noji, Pierre Zweigenbaum, and Jun'ichi Tsu- jii. 2020. CharacterBERT: Reconciling ELMo and BERT for word-level open-vocabulary representa- tions from characters. In Proceedings of the 28th International Conference on Computational Linguis- tics, pages 6903-6915, Barcelona, Spain (Online). International Committee on Computational Linguis- tics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Jointly learning to align and translate with transformer models", "authors": [ { "first": "Sarthak", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Peitz", "suffix": "" }, { "first": "Udhyakumar", "middle": [], "last": "Nallasamy", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Paulik", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4453--4462", "other_ids": { "DOI": [ "10.18653/v1/D19-1453" ] }, "num": null, "urls": [], "raw_text": "Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 4453-4462, Hong Kong, China. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "SimAlign: High quality word alignments without parallel training data using static and contextualized embeddings", "authors": [ { "first": "Masoud Jalili", "middle": [], "last": "Sabet", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Dufter", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", "volume": "", "issue": "", "pages": "1627--1643", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masoud Jalili Sabet, Philipp Dufter, Fran\u00e7ois Yvon, and Hinrich Sch\u00fctze. 2020. SimAlign: High quality word alignments without parallel training data using static and contextualized embeddings. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 1627-1643, Online. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "What does BERT learn about the structure of language", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Cross-lingual ability of multilingual bert: An empirical study", "authors": [ { "first": "K", "middle": [], "last": "Karthikeyan", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual bert: An empirical study. In International Conference on Learning Representations.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Europarl: A Parallel Corpus for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Conference Proceedings: the tenth Machine Translation Summit", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Conference Pro- ceedings: the tenth Machine Translation Summit, pages 79-86, Phuket, Thailand. 
AAMT, AAMT.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural network-based word alignment through score aggregation", "authors": [ { "first": "Jo\u00ebl", "middle": [], "last": "Legrand", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the First Conference on Machine Translation", "volume": "1", "issue": "", "pages": "66--73", "other_ids": { "DOI": [ "10.18653/v1/W16-2207" ] }, "num": null, "urls": [], "raw_text": "Jo\u00ebl Legrand, Michael Auli, and Ronan Collobert. 2016. Neural network-based word alignment through score aggregation. In Proceedings of the First Conference on Machine Translation: Volume 1, Research Pa- pers, pages 66-73, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Report of NEWS 2009 machine transliteration shared task", "authors": [ { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Kumaran", "suffix": "" }, { "first": "Min", "middle": [], "last": "Pervouchine", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration", "volume": "", "issue": "", "pages": "1--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haizhou Li, A Kumaran, Vladimir Pervouchine, and Min Zhang. 2009. Report of NEWS 2009 machine transliteration shared task. In Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009), pages 1-18, Suntec, Singapore. Association for Computational Linguis- tics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": { "DOI": [ "10.18653/v1/D15-1166" ] }, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. As- sociation for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "CharBERT: Characteraware pre-trained language model", "authors": [ { "first": "Wentao", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Chenglei", "middle": [], "last": "Si", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shijin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Guoping", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "39--50", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.4" ] }, "num": null, "urls": [], "raw_text": "Wentao Ma, Yiming Cui, Chenglei Si, Ting Liu, Shijin Wang, and Guoping Hu. 2020. 
CharBERT: Character- aware pre-trained language model. In Proceedings of the 28th International Conference on Computa- tional Linguistics, pages 39-50, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A Systematic Comparison of Various Statistical Alignment Models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": { "DOI": [ "10.1162/089120103321337421" ] }, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19-51.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "How multilingual is multilingual BERT?", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4996--5001", "other_ids": { "DOI": [ "10.18653/v1/P19-1493" ] }, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Flo- rence, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Statistical models for unsupervised, semi-supervised supervised transliteration mining", "authors": [ { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "2", "pages": "349--375", "other_ids": { "DOI": [ "10.1162/COLI_a_00286" ] }, "num": null, "urls": [], "raw_text": "Hassan Sajjad, Helmut Schmid, Alexander Fraser, and Hinrich Sch\u00fctze. 2017. Statistical models for unsu- pervised, semi-supervised supervised transliteration mining. Computational Linguistics, 43(2):349-375.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Efficient word alignment with markov chain monte carlo. The Prague Bulletin of Mathematical Linguistics", "authors": [ { "first": "Robert", "middle": [], "last": "\u00d6stling", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1515/pralin-2016-0013" ] }, "num": null, "urls": [], "raw_text": "Robert \u00d6stling and J\u00f6rg Tiedemann. 2016. Efficient word alignment with markov chain monte carlo. 
The Prague Bulletin of Mathematical Linguistics, 106.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "From characters to words to in between: Do we capture morphology?", "authors": [ { "first": "Clara", "middle": [], "last": "Vania", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lopez", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2016--2027", "other_ids": { "DOI": [ "10.18653/v1/P17-1184" ] }, "num": null, "urls": [], "raw_text": "Clara Vania and Adam Lopez. 2017. From characters to words to in between: Do we capture morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2016-2027, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, NIPS'17, page 6000-6010, Red Hook, NY, USA. Curran Associates Inc.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Applying the transformer to character-level transduction", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Mans", "middle": [], "last": "Hulden", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "1901--1907", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shijie Wu, Ryan Cotterell, and Mans Hulden. 2021. Ap- plying the transformer to character-level transduction. In Proceedings of the 16th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Main Volume, pages 1901-1907, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "833--844", "other_ids": { "DOI": [ "10.18653/v1/D19-1077" ] }, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguis- tics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Adding interpretable attention to neural translation models improves word alignment", "authors": [ { "first": "Thomas", "middle": [], "last": "Zenkel", "suffix": "" }, { "first": "Joern", "middle": [], "last": "Wuebker", "suffix": "" }, { "first": "John", "middle": [], "last": "Denero", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Zenkel, Joern Wuebker, and John DeNero. 2019. Adding interpretable attention to neural trans- lation models improves word alignment.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "; Wu and Dredze (2019); Artetxe et al. (2020) zero-shot cross-lingual transfer is analyzed. Dufter and Sch\u00fctze (2020) further analyzes mBERT's capabilities with BERT's architecture and the structure of languages examined. The authors performed their experiments on a pairing of English with Fake-English, as proposed by K et al. (2020) in their rigorous empirical study of mBERT where linguistic properties of languages, architecture and learning objectives are investigated.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "(a) EngF ake (b) EngF ake f 1 f 2 Showing heatmaps for (a) EngF ake and (b) EngF ake f1f2 . The lighter green cells show lower cosine similarity, the darker green cells show higher cosine similarity.(a) EngDeu (b) EngEll (c) DeuEll", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Showing heatmaps for (a) EngDeu, (b) EngEll and (c) DeuEll. The lighter green cells show lower cosine similarity, the darker green cells show higher cosine similarity.", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "type_str": "table", "content": "