{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:08.312169Z" }, "title": "BERTologiCoMix * How does Code-Mixing interact with Multilingual BERT?", "authors": [ { "first": "Sebastin", "middle": [], "last": "Santy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research", "location": { "country": "India" } }, "email": "" }, { "first": "Anirudh", "middle": [], "last": "Srinivasan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research", "location": { "country": "India" } }, "email": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research", "location": { "country": "India" } }, "email": "monojitc@microsoft.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Models such as mBERT and XLMR have shown success in solving Code-Mixed NLP tasks even though they were not exposed to such text during pretraining. Code-Mixed NLP models have relied on using synthetically generated data along with naturally occurring data to improve their performance. Finetuning 1 mBERT on such data improves it's codemixed performance, but the benefits of using the different types of Code-Mixed data aren't clear. In this paper, we study the impact of finetuning with different types of code-mixed data and outline the changes that occur to the model during such finetuning. Our findings suggest that using naturally occurring code-mixed data brings in the best performance improvement after finetuning and that finetuning with any type of code-mixed text improves the responsivity of it's attention heads to code-mixed text inputs. * The word BERTologiCoMix is a portmanteau of BERTology and Code-Mixing, and is inspired from the title of the graphic novel: Logicomix: An Epic Search for Truth by Apostolos Doxiadis and Christos Papadimitriou (2009). \u2020 The authors contributed equally to the work. 1 In this paper, unless specifically stated, finetuning refers to MLM finetuning/continued pretraining and not downstream task finetuning (a) Vanilla mBERT (without fine-tuning) (b) mBERT fine-tuned on En-Hi CM", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Models such as mBERT and XLMR have shown success in solving Code-Mixed NLP tasks even though they were not exposed to such text during pretraining. Code-Mixed NLP models have relied on using synthetically generated data along with naturally occurring data to improve their performance. Finetuning 1 mBERT on such data improves it's codemixed performance, but the benefits of using the different types of Code-Mixed data aren't clear. In this paper, we study the impact of finetuning with different types of code-mixed data and outline the changes that occur to the model during such finetuning. Our findings suggest that using naturally occurring code-mixed data brings in the best performance improvement after finetuning and that finetuning with any type of code-mixed text improves the responsivity of it's attention heads to code-mixed text inputs. * The word BERTologiCoMix is a portmanteau of BERTology and Code-Mixing, and is inspired from the title of the graphic novel: Logicomix: An Epic Search for Truth by Apostolos Doxiadis and Christos Papadimitriou (2009). \u2020 The authors contributed equally to the work. 
1 In this paper, unless specifically stated, finetuning refers to MLM finetuning/continued pretraining and not downstream task finetuning (a) Vanilla mBERT (without fine-tuning) (b) mBERT fine-tuned on En-Hi CM", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Massive multilingual models such as mBERT (Devlin et al., 2019) and XLMR (Conneau et al., 2020) have recently become very popular as they cover over 100 languages and are capable of zero-shot transfer of performance in downstream tasks across languages. As these models serve as good multilingual representations of sentences (Pires et al., 2019) , there have been attempts at using these representations for encoding code-mixed sentences (Srinivasan, 2020; Aguilar et al., 2020; Khanuja et al., 2020) . Code-Mixing (CM) is the mixing of words belonging two or more languages within a Figure 1 : t-SNE representations of En-Es CM sentences on the respective models. Each color represents CM sentences with the same meaning but with different amounts of mixing generated based on the g-CM method in Sec 3.1. The tight clusters in (b) shows that CM sentence representations align better in mBERT fine-tuned on any CM data regardless of the language of mixing. single sentence and is a commonly observed phenomenon in societies with multiple spoken languages. These multilingual models have shown promise for solving CM tasks having surpassed the previously achieved performances (Khanuja et al., 2020; Aguilar et al., 2020) . This is an impressive feat considering that these models have never been exposed to any form of code-mixing during their pre-training stage.", "cite_spans": [ { "start": 42, "end": 63, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 73, "end": 95, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF8" }, { "start": 326, "end": 346, "text": "(Pires et al., 2019)", "ref_id": "BIBREF28" }, { "start": 439, "end": 457, "text": "(Srinivasan, 2020;", "ref_id": "BIBREF36" }, { "start": 458, "end": 479, "text": "Aguilar et al., 2020;", "ref_id": "BIBREF0" }, { "start": 480, "end": 501, "text": "Khanuja et al., 2020)", "ref_id": "BIBREF17" }, { "start": 829, "end": 832, "text": "(b)", "ref_id": null }, { "start": 1177, "end": 1199, "text": "(Khanuja et al., 2020;", "ref_id": "BIBREF17" }, { "start": 1200, "end": 1221, "text": "Aguilar et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 585, "end": 593, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditionally, CM has been a spoken phenomenon though it is slowly penetrating into written form of communication (Tay, 1989) . However, they mostly occur in an informal setting and hence such CM data is not publicly available in large quantities. Such scarcity of data would mean that building independent CM models can be unfeasible. With the onset of pre-trained multilingual models, further training with CM data can help in adapting these models for CM processing. However, even for further training, there is a requirement for a significant amount of data albeit lesser than starting from scratch. The amount of data available even for their monolingual counterparts is very less (Joshi et al., 2020) let alone the amount of real-world CM data. This can prove to be a bottleneck. Rightly so, there have been previous works exploring synthesis of CM data for the purpose of data augmentation (Bhat et al., 2016; Pratapa et al., 2018a) . 
Synthesis of CM mostly rely on certain linguistic theories (Poplack, 2000) to construct grammatically plausible sentences. These works have shown that using the synthetic and real CM data in a curriculum setting while fine-tuning can help with achieving better performances on the downstream CM tasks. Though this is analogous to adapting models to new domains, CM differs in that the adaptation is not purely at vocabulary or style level but rather at a grammatical level. Although it is known such adaptation techniques can bring an improvement, it is not well understood how exactly fine-tuning helps in the CM domain.", "cite_spans": [ { "start": 114, "end": 125, "text": "(Tay, 1989)", "ref_id": "BIBREF38" }, { "start": 686, "end": 706, "text": "(Joshi et al., 2020)", "ref_id": "BIBREF15" }, { "start": 897, "end": 916, "text": "(Bhat et al., 2016;", "ref_id": "BIBREF4" }, { "start": 917, "end": 939, "text": "Pratapa et al., 2018a)", "ref_id": "BIBREF31" }, { "start": 1001, "end": 1016, "text": "(Poplack, 2000)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Through this paper, we seek to answer these lingering questions which exist in area of CM processing. We first study the impact of finetuning multilingual models with different forms of CM data on downstream task performance. For this purpose, we rely on three forms of CM varying in their complexity of mixing, naturalness and obtainability -(i) randomly ordered code-mixing (l-CM), (ii) grammatically appropriate code-mixing (g-CM) both of which are synthetically generated and (iii) real-world code-mixing (r-CM). We perform this comparative analysis in a controlled setting where we finetune models with the same quantity of CM text belonging to different forms and then evaluate these finetuned models on 11 downstream tasks. We find that on average the r-CM performs better on all tasks, whereas the synthetic forms of CM (l-CM, g-CM) tend to diminish the performance as compared to the stock/non-finetuned models. However, these synthetic forms of data can be used in conjuction to r-CM in a curriculum setting which allows to alleviate the data scarcity issue. In order to understand the difference in the behavior of these models, we analyze their self-attention heads using a novel visualization technique and show how finetuning with CM causes the model to respond more effectively to CM texts. We notice that using r-CM for finetuning makes the model more robust and the representations more distributed leading to better and stable overall performances on the downstream tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. Section 2 surveys prior work done in domain adaptation of transformer-based LMs, code-mixing and interpretability and analysis techniques. Section 3 introduces the different types of code-mixing and the models that we build with them. Section 4 and 5 respectively presents the task-based and attention-head based probing experiments along with the findings. Section 6 concludes the paper by summarizing the work and laying out future directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pre-trained Language Models trained on generic data such as BERT and RoBERTa are often adapted to the domain where it is required to be used. 
Domain adaptation benefits BERT in two ways (i) it gives exposure to text in the domain specific contexts and (ii) adds domain specific terms to the vocabulary. BERT has been adapted to several domains especially once which have its own complex jargon of communication such as the biomedical domain (Lee et al., 2020; Alsentzer et al., 2019) , scientific texts or publications (Beltagy et al., 2019) , legal domain (Chalkidis et al., 2020) and financial document processing (Yang et al., 2020b) . Most of these works employ sophisticated techniques for mining large quantities of domain specific text from the internet and thus prefer to train the BERT model from scratch rather than fine-tuning the available BERT checkpoints. This is because they don't have to accommodate existing vocabulary along with the domain specific vocabulary which can lead to further fragmentation (Gu et al., 2020) . While most works have looked at domain adaptation by plainly continuing the training using MLM objectives, some works have explored on different techniques to improve downstream task performance. Ma et al. (2019) uses curriculum learning and domain-discriminative data selection for domain adaptation. Adversarial techniques have been used for enforce domain-invariant learning and thus improve on generalization (Naik and Rose, 2020; Zhang et al., 2020) . Ye et al. (2020) explores adapting BERT across languages. However, domain adaptation is not always effective and can lead to worse performances. This depends on several factors such as how different the domains are (Kashyap et al., 2020) or how much data is available (Zhang et al., 2020) .", "cite_spans": [ { "start": 441, "end": 459, "text": "(Lee et al., 2020;", "ref_id": "BIBREF21" }, { "start": 460, "end": 483, "text": "Alsentzer et al., 2019)", "ref_id": "BIBREF1" }, { "start": 519, "end": 541, "text": "(Beltagy et al., 2019)", "ref_id": "BIBREF3" }, { "start": 557, "end": 581, "text": "(Chalkidis et al., 2020)", "ref_id": "BIBREF5" }, { "start": 616, "end": 636, "text": "(Yang et al., 2020b)", "ref_id": "BIBREF44" }, { "start": 1019, "end": 1036, "text": "(Gu et al., 2020)", "ref_id": null }, { "start": 1235, "end": 1251, "text": "Ma et al. (2019)", "ref_id": "BIBREF23" }, { "start": 1452, "end": 1473, "text": "(Naik and Rose, 2020;", "ref_id": "BIBREF25" }, { "start": 1474, "end": 1493, "text": "Zhang et al., 2020)", "ref_id": "BIBREF46" }, { "start": 1496, "end": 1512, "text": "Ye et al. (2020)", "ref_id": "BIBREF45" }, { "start": 1711, "end": 1733, "text": "(Kashyap et al., 2020)", "ref_id": "BIBREF16" }, { "start": 1764, "end": 1784, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Domain Adaptation of BERT", "sec_num": "2.1" }, { "text": "Traditionally, Code-Mixing has been used in informal contexts and can be difficult to obtain in large quantities (Rijhwani et al., 2017) . This scarcity of data has been previously tackled by generation of synthetic CM data to augment the real CM data. Bhat et al. (2016) ; Pratapa et al. (2018a) demonstrate a technique to generate code-mixed sentences using parallel sentences and show that using these synthetic sentences can improve language model perplexity. A similar method is also proposed by Samanta et al. (2019) which uses parse trees to generate synthetic sentences. Yang et al. (2020a) generates CM sentences by using phrase tables to align and mix parts of a parallel sentence. Winata et al. 
2019proposes a technique to generate codemixed sentences using pointer generator networks. The efficacy of synthetic CM data is evident from these works where they have been used in a curriculum setting for CM language modelling (Pratapa et al., 2018a) , cross-lingual training of multilingual transformer models (Yang et al., 2020a) as well as to develop CM embeddings as a better alternative to standard cross-lingual embeddings for CM tasks (Pratapa et al., 2018b) . In this work, we use grammatical theories to generate synthetic CM data from parallel sentences analogous to the aforementioned techniques.", "cite_spans": [ { "start": 113, "end": 136, "text": "(Rijhwani et al., 2017)", "ref_id": "BIBREF33" }, { "start": 253, "end": 271, "text": "Bhat et al. (2016)", "ref_id": "BIBREF4" }, { "start": 501, "end": 522, "text": "Samanta et al. (2019)", "ref_id": "BIBREF34" }, { "start": 579, "end": 598, "text": "Yang et al. (2020a)", "ref_id": "BIBREF43" }, { "start": 935, "end": 958, "text": "(Pratapa et al., 2018a)", "ref_id": "BIBREF31" }, { "start": 1019, "end": 1039, "text": "(Yang et al., 2020a)", "ref_id": "BIBREF43" }, { "start": 1150, "end": 1173, "text": "(Pratapa et al., 2018b)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Code-Mixing", "sec_num": "2.2" }, { "text": "Given the complex black-box nature of the BERT model, there have been a large number of works that propose experiments to probe and understand the working of different components of the BERT model. A large portion of these methods have focused on the attention mechanism of the transformer model. Clark et al. (2019) ; Htut et al. 2019find that certain attention heads encode linguistic dependencies between words of the sentence. Kovaleva et al. (2019) report on the patterns in the attention heads of BERT and find that a large number of heads just attend to the [CLS] or [SEP] tokens and do not encode any relation between the words of the sentence. Michel et al. (2019) ; Prasanna et al. (2020) also show that many of BERT's attention heads are redundant and pruning heads does not affect downstream task performance. In this paper, we borrow ideas from these works and propose a technique for visualizing the attention heads and how their behaviour changes during finetuning.", "cite_spans": [ { "start": 297, "end": 316, "text": "Clark et al. (2019)", "ref_id": "BIBREF7" }, { "start": 653, "end": 673, "text": "Michel et al. (2019)", "ref_id": "BIBREF24" }, { "start": 676, "end": 698, "text": "Prasanna et al. (2020)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "BERT Attention based probing", "sec_num": "2.3" }, { "text": "In this section, we describe the mBERT models, the modifications we make to them, and the types of CM data that we use for training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "For the purpose of this study, we characterize CM data across two dimensions: linguistic complexity and languages involved. Here, we experiment with CM for two different language pairs: English-Spanish (enes) and English-Hindi (enhi). While Spanish has similar word order and a sizeable shared vocabulary with English, Hindi has a different word order and no shared vocabulary by virtue of using a different script. 
Thus, investigating through these two diverse pairs is expected to help us understand the representational variance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types of Code-Mixing", "sec_num": "3.1" }, { "text": "The linguistic complexity of code-mixing can be categorized into the following three types:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types of Code-Mixing", "sec_num": "3.1" }, { "text": "Lexical Code-Mixing (l-CM): The simplest form of code-mixing is to substitute lexical units within a monolingual sentence with its counterpart from the other language. This can be achieved by using parallel sentences, and aligning the words with an aligner (Dyer et al., 2013) .", "cite_spans": [ { "start": 257, "end": 276, "text": "(Dyer et al., 2013)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Types of Code-Mixing", "sec_num": "3.1" }, { "text": "Grammatical Code-Mixing (g-CM): There are grammatical constraints (Joshi, 1982; Poplack, 2000; Belazi et al., 1994) on word-order changes and lexical substitution during code-mixing that the l-CM does not take into account. Pratapa et al. (2018a) propose a technique to generate all grammatically valid CM sentences from a pair of parallel sentences. Here, we use this generated dataset as our g-CM 2 .", "cite_spans": [ { "start": 66, "end": 79, "text": "(Joshi, 1982;", "ref_id": "BIBREF14" }, { "start": 80, "end": 94, "text": "Poplack, 2000;", "ref_id": "BIBREF29" }, { "start": 95, "end": 115, "text": "Belazi et al., 1994)", "ref_id": "BIBREF2" }, { "start": 224, "end": 246, "text": "Pratapa et al. (2018a)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Types of Code-Mixing", "sec_num": "3.1" }, { "text": "Parse trees are generated for parallel sentences (between two languages L 1 and L 2 ), and common nodes between these parse trees are then replaced based on certain conditions specified by Equivalence Constraint (EC) theory (Poplack, 2000; Sankoff, 1998) , thereby producing a grammatically sound code-mixing. Fine-tuning with this form of CM should ideally impart the knowledge of grammatical boundaries for CM and would let us know whether a grammatically correct CM sentence is required to improve the performance.", "cite_spans": [ { "start": 224, "end": 239, "text": "(Poplack, 2000;", "ref_id": "BIBREF29" }, { "start": 240, "end": 254, "text": "Sankoff, 1998)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Types of Code-Mixing", "sec_num": "3.1" }, { "text": "Real Code-Mixing (r-CM): While g-CM considers purely the syntactic structure of CM, real-world Table 1 : Performance of the models for different tasks along with their standard deviations. The trained model language corresponds to the language the model is tested on, and is denoted by . r-CM trained models almost always perform better than models trained on other types of CM data. code-mixing is influenced by many more factors such as cultural/social and/or language-specific norms which comes in the semantic and pragmatics space of language understanding. Though r-CM is a subset of g-CM, there does not exist any method which can sample realistic CM from such synthetic data, hence we rely on real-world CM datasets. 
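To make the l-CM construction above concrete, the following is a minimal sketch of lexical substitution over a word-aligned parallel sentence pair. It is not the generation pipeline used in this paper: the function name, the toy alignments and the swap_prob parameter are purely illustrative, and in practice the alignments would come from an aligner such as fast_align (Dyer et al., 2013).

import random

def lexical_code_mix(src_tokens, tgt_tokens, alignments, swap_prob=0.5, seed=1):
    # src_tokens / tgt_tokens: tokenized parallel sentences (matrix / embedded language)
    # alignments: (src_idx, tgt_idx) word-alignment pairs produced by a word aligner
    # swap_prob: fraction of aligned source words to substitute
    rng = random.Random(seed)
    src_to_tgt = dict(alignments)
    mixed = []
    for i, tok in enumerate(src_tokens):
        if i in src_to_tgt and rng.random() < swap_prob:
            mixed.append(tgt_tokens[src_to_tgt[i]])  # substitute the aligned word
        else:
            mixed.append(tok)                        # keep the original word
    return ' '.join(mixed)

# toy example with hand-written alignments, for illustration only
en = 'i am going to the market'.split()
hi = 'main bazaar ja raha hoon'.split()
print(lexical_code_mix(en, hi, [(0, 0), (2, 2), (5, 1)]))

g-CM additionally filters such substitutions through parse-tree constraints, while real-world r-CM cannot be produced by any such procedure, which is why we fall back on naturally occurring data for it.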
Fine-tuning with this form should let the model become aware of certain nuances of realworld code-mixing which are still not completely known.", "cite_spans": [], "ref_spans": [ { "start": 95, "end": 102, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Types of Code-Mixing", "sec_num": "3.1" }, { "text": "There are 3 [types] \u00d7 2 [language-pairs] = 6 combinations of data which can be obtained based on the previous specifications. For l-CM and g-CM, we use the same set of parallel sentences: en-es from Rijhwani et al. (2017) and en-hi from Kunchukuttan et al. (2018) . As CM is prominently used in informal contexts, it is difficult to procure textual r-CM data. We use twitter data from Rijhwani et al. (2017) for en-es; for en-hi, we use data from online forums and Twitter respectively from Chandu et al.", "cite_spans": [ { "start": 199, "end": 221, "text": "Rijhwani et al. (2017)", "ref_id": "BIBREF33" }, { "start": 237, "end": 263, "text": "Kunchukuttan et al. (2018)", "ref_id": "BIBREF20" }, { "start": 385, "end": 407, "text": "Rijhwani et al. (2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "3.2" }, { "text": "and Patro et al. (2017) . For each of the 6 combinations, we randomly sample 100,000 sentences which is then used to further train mBERT with the masked language modelling objective. We use layer-wise scaled learning rate while finetuning the models. Sun et al. 2019Model Notation: Let m be the vanilla mBERT, then m p,q are the mBERTs further trained on p, q data, where p \u2208 {l, g, r} is the complexity of mixing and q \u2208 {enes, enhi} is the language of mixing. For example, a model trained on English-Hindi lexical code-mixed data will be represented as m l,enhi . means that the model used depends on the configuration of the corresponding data. For example, m l, with enes data would mean that the model used is m l,enes while with enhi data would mean that the model used is m l,enhi .", "cite_spans": [ { "start": 4, "end": 23, "text": "Patro et al. (2017)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "3.2" }, { "text": "In this section, we describe layer-wise task-based probing of the different models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task-based Probing", "sec_num": "4" }, { "text": "Recently, two benchmarks for code-mixing were released: GLUECoS (Khanuja et al., 2020) and LINCE (Aguilar et al., 2020) . For this study, we probe with the following tasks from GLUECoS: Language Identification (LID), Part-of-Speech (POS) Tagging, Named Entity Recognition (NER) and Sentiment Analysis (SENT) for both enes and enhi, and Question Answering (QA) and Natural Language Inference (NLI) for only enhi.", "cite_spans": [ { "start": 64, "end": 86, "text": "(Khanuja et al., 2020)", "ref_id": "BIBREF17" }, { "start": 97, "end": 119, "text": "(Aguilar et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Tasks", "sec_num": "4.1" }, { "text": "We first measure the performance of these models on the aforementioned tasks. For each task, we fine-tune the models further after attaching a task specific classification layer. We report the average performances and standard deviations of each model run for 5 seeds in Table 1 . 3 In addition to getting absolute performances, we want to get an insight of how much each layer of the different models contribute to the performance of a particular task. 
Following Tenney et al. 2019, we measure the solvability of a task by finding out the expected layer at which the model is able to correctly solve the task. Here the mBERT weights are kept frozen and a weighted sum of representations from each layer are passed to the task specific layer. Figure 2 shows the layer-wise F1 scores for the tasks for different models and language pairs. We additionally calculate scalar mixing weights which lets us know the contribution of each layer by calculating the attention paid to each layer for the task.", "cite_spans": [ { "start": 281, "end": 282, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 271, "end": 278, "text": "Table 1", "ref_id": null }, { "start": 743, "end": 751, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Method", "sec_num": "4.2" }, { "text": "From Table 1 it is clear that for almost all the tasks, m r, models perform better than the other finetuned models. In Particular, fine-tuning with r-CM data helps with SENT, NER, LID enes as well as QA tasks. While for POS, the performance remains almost same regardless of which data the model is fine-tuned with. 4 These differences are also reflected in the layerwise performance of these models as shown in Figure 2. The tasks are considered solved at the knee point where the performances start plateauing. The performances of different models start at the same note, and after a certain point m r, diverges to plateau at a higher performance than others. This can be attributed to final layers adapting the most during MLM fine-tuning. Kovaleva et al., 2019) . LID gets solved around 2 nd layer. enhi LID gives a relatively high performance at the 0 th layer indicating that it only needs the to-ken+positional embeddings. This is because enhi LID task has en and hi words in different scripts, which means it can be solved even with a simple unicode classification rule. POS gets solved at around 4 th layer. The indifference to fine-tuning observed in case of POS is reflected here as well, as all the models are performing equally at all the layers for both the languages.", "cite_spans": [ { "start": 316, "end": 317, "text": "4", "ref_id": null }, { "start": 743, "end": 765, "text": "Kovaleva et al., 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 1", "ref_id": null }, { "start": 412, "end": 418, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Observations", "sec_num": "4.3" }, { "text": "NER gets solved around the 5 th layer. Here, r-CM training seems to help for enhi, perhaps due to exposure to more world knowledge which is required for NER. SENT shows an interesting shift in patterns. We can see that m l,enes solves the task at 6 th layer whereas the other models solve it at around 8 th layer. Thus, the general trend observed is that easier tasks like LID, POS are solved in the earlier layers and as the complexity of the tasks increase, the effective layer moves deeper -which shows a neat pattern of how BERT \"re-discovers\" the NLP pipeline (Tenney et al., 2019) , or rather the CM pipeline in our case.", "cite_spans": [ { "start": 565, "end": 586, "text": "(Tenney et al., 2019)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Observations", "sec_num": "4.3" }, { "text": "As observed earlier, exposing mBERT to r-CM help boost its overall and layer-wise performance on CM tasks. 
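For reference, the layer-wise probe of Section 4.2 amounts to a frozen encoder with learned scalar mixing weights over its hidden layers. The sketch below is a simplified stand-in, assuming the HuggingFace transformers implementation of mBERT; the ScalarMixProbe class, the toy label count and the example input are illustrative, and the task-specific heads, training loop and layer-wise learning rates used in the actual experiments are omitted.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ScalarMixProbe(nn.Module):
    # frozen mBERT + learned softmax weights over layers + linear per-token head
    def __init__(self, n_labels, model_name='bert-base-multilingual-cased'):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        for p in self.encoder.parameters():  # keep the mBERT weights frozen
            p.requires_grad = False
        n_layers = self.encoder.config.num_hidden_layers + 1  # 12 layers + embedding output
        self.layer_weights = nn.Parameter(torch.zeros(n_layers))
        self.head = nn.Linear(self.encoder.config.hidden_size, n_labels)

    def forward(self, **inputs):
        hidden = self.encoder(**inputs).hidden_states        # tuple of layer outputs 0..12
        w = torch.softmax(self.layer_weights, dim=0)          # scalar mixing weights
        mixed = sum(w[i] * h for i, h in enumerate(hidden))   # weighted sum of layers
        return self.head(mixed)                               # per-token logits

tok = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')
probe = ScalarMixProbe(n_labels=3)
batch = tok(['yeh movie bahut acchi thi'], return_tensors='pt')
print(probe(**batch).shape)  # (1, sequence_length, 3)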
In this section, we describe three structural probing experiments, through which we will try to visualize the structural changes in the network, if any, induced by continued pre-training with CM data that are responsible for performance gains. We will first look at whether there are any changes in the behaviour of attention heads at a global level by checking the inter-head distances within a model. Further, we want to localize and identify the heads whose behaviours have changed. Finally, we take a look at how the attention heads respond to code-mixed stimulus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structural Probing", "sec_num": "5" }, { "text": "The probes for conducting the experiments consist of CM and Monolingual sentences. We take a sample of 1000 sentences for each type of CM as well as monolingual sentences for each language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probes", "sec_num": "5.1" }, { "text": "Probe Notation: To denote these probes, we use d p,q such that p \u2208 {l, g, r} is the complexity of mixing and q \u2208 {enes, enhi, en, hi, es} are the languages. For example, English-Spanish lexical CM data is represented as d l,enhi and (real) Spanish monolingual data is represented as d \u2212,es . indicates that model trained in the same language (code-mixed) as probe is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probes", "sec_num": "5.1" }, { "text": "Has anything changed within the models due to pre-training with CM datasets? In order to answer this question, we look at the global patterns of relative distances between the attention heads within a model. Method: Clark et al. (2019) describes an interhead similarity measure which allows for visualizing distances between each attention head with another within a model. The distance d between two heads H i and H j is calculated as,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Patterns of Change", "sec_num": "5.2" }, { "text": "d(H i , H j ) = token\u2208 sentence JS(H i (token), H j (token)) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Patterns of Change", "sec_num": "5.2" }, { "text": "where JS is the Jensen-Shannon Divergence between attention distributions. We average these distances obtained across 1000 sentences (d , ). Further, in order to visualize these head distances, we use multidimensional scaling (Kruskal, 1964) which preserves the relative distance better than other scaling methods such as T-SNE or PCA (Van Der Maaten et al., 2009) .", "cite_spans": [ { "start": 226, "end": 241, "text": "(Kruskal, 1964)", "ref_id": "BIBREF19" }, { "start": 335, "end": 364, "text": "(Van Der Maaten et al., 2009)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Global Patterns of Change", "sec_num": "5.2" }, { "text": "Observation: Figure 3 shows the two-dimensional projections of the heads labeled by the layers. There are clear differences between the patterns in m and the other models, though the same cannot be said for the probes. 
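Concretely, the inter-head distance of Eq. (1) and the MDS projection behind Figure 3 can be computed as in the sketch below. It assumes the attention maps for the probe sentences have already been extracted (e.g., with output_attentions=True in a HuggingFace model); the toy tensor only stands in for those maps, and since scipy's jensenshannon returns a distance, it is squared to recover the divergence.

import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.manifold import MDS

def head_distance_matrix(attn):
    # attn: (n_sentences, n_layers, n_heads, seq, seq) attention maps over probe sentences
    # returns a (n_layers*n_heads) x (n_layers*n_heads) matrix of Eq. (1) distances
    n_sent, L, H, S, _ = attn.shape
    flat = attn.reshape(n_sent, L * H, S, S)  # one stack of attention maps per head
    D = np.zeros((L * H, L * H))
    for i in range(L * H):
        for j in range(i + 1, L * H):
            # JS divergence between the two heads' attention distributions,
            # summed over tokens and averaged over probe sentences
            d = np.mean([sum(jensenshannon(flat[s, i, t], flat[s, j, t]) ** 2
                             for t in range(S)) for s in range(n_sent)])
            D[i, j] = D[j, i] = d
    return D

# toy attention tensor standing in for real probe outputs (plain loops: slow but simple)
rng = np.random.default_rng(0)
toy = rng.random((2, 12, 12, 8, 8))
toy /= toy.sum(-1, keepdims=True)  # each row is a probability distribution over tokens
coords = MDS(n_components=2, dissimilarity='precomputed').fit_transform(head_distance_matrix(toy))
print(coords.shape)  # (144, 2): one 2-D point per attention head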
m shows a rather distributed representation of heads across layers; in particular, g-CM models have a tightly packed representation especially for the later layers.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Global Patterns of Change", "sec_num": "5.2" }, { "text": "Attention patterns of which heads have changed?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Patterns of Change", "sec_num": "5.3" }, { "text": "We observe that there is a change in the overall internal representations based on the type of data which the models are exposed to. It would be interesting to know which specific attention heads, or layers are most affected by the exposure to CM data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Patterns of Change", "sec_num": "5.3" }, { "text": "Method: In order to contrast the attention patterns of specific heads between m , and the base model -m , we calculate the distance between their corresponding heads as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Patterns of Change", "sec_num": "5.3" }, { "text": "\u2206 m = JS(H m , i,j (token), H m i,j (token)) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Patterns of Change", "sec_num": "5.3" }, { "text": "where JS is the Jensen-Shannon Divergence, i and j are the layers and their respective heads, m , is any model in the set of fine-tuned models and m is the vanilla model. We visualize these distances in form of heatmaps (\u2206 m maps). For the sake of clarity, only top 15 attention heads is plotted for each \u2206 m map. The darker the head, the more the head has changed between a particular trained model and the vanilla model. Visual triangulation can let us understand if there are common heads between sets of models and probes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Patterns of Change", "sec_num": "5.3" }, { "text": "Observation: Figure 4 depicts the different combinations of \u2206 m maps. It can be seen how there are common heads between different configurations of trained models as well as the inference data which is used. Here, even the difference between different languages and forms of code-mixing stand out compared to the previous analysis. We also look at cross-interaction of languages: fine-tuned in one language and probed on another. Through visual examination, we highlight some of the common heads which are present among the different \u2206 m plots.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Local Patterns of Change", "sec_num": "5.3" }, { "text": "How do attention heads respond to code-mixed probes? The common patterns in the way heads are functioning between different models and probes are easily observable from the set of \u2206 m maps. These do point us to certain heads getting more activated while encoding a particular type of CM sentence. In this section, we want to understand Figure 4 : \u2206 m maps for different configurations of trained models and probes. The first row of maps depict head interactions within same language whereas the second row of maps depict cross-language interaction i.e. trained and probed on different languages. Some of the common heads that can be observed have been marked to show the patterns which differentiate between complexity and language of models and probes. 
how these heads respond to input probes. We borrow the term responsivity, R, from the field of neuroscience which is used to summarize the change in the neural response per unit signal (stimulus) strength. In this context, we want to understand the change in attention head response of different models when exposed to CM data which act as the stimulus.", "cite_spans": [], "ref_spans": [ { "start": 336, "end": 344, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Responsivity to Code-Mixing", "sec_num": "5.4" }, { "text": "Method: Our aim is to understand the excitement of different heads when they see code-mixed data as a stimulus. To this end, we design a classification experiment to quantify the excitement of each head (/ neuron) while distinguishing between monolingual and CM classes. For the CM class, we randomly sample 2000 sentences from r-CM in the same way as we did for probes. Similarly, for monolingual class, we sample 1000 sentences each from en and es or hi. Each probe is then passed through the different models to obtain the attentions. To summarize the net attention for each head, we average the attentions over all the tokens These average attention heads are then used as features (x) (12 \u00d7 12 = 144 features) with the monolingual and CM classes being the predictor variable (y). To capture the relative excitement of different heads to y, we define responsivity (R) as the gain of information of each feature (or heads) in context of the prediction variable (y). This is analogous to Information Gain used in determining feature importance. Hence, Responsivity of a head x for class y can be written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Responsivity to Code-Mixing", "sec_num": "5.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R x,y = H(x) \u2212 H(x|y)", "eq_num": "(3)" } ], "section": "Responsivity to Code-Mixing", "sec_num": "5.4" }, { "text": "where, H(x) is the entropy of class distribution for x and H(x|y) is the conditional entropy for x given y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Responsivity to Code-Mixing", "sec_num": "5.4" }, { "text": "Observation: As shown in Figure 5 , we plot the responsivity of different attention heads to CM in the form of 12 \u00d7 12 R heatmaps. We also plot the distribution of these values. We report two values, mean responsivity (\u00b5) of a model to code-mixing and kurtosis (\u03ba) to measure the skewness or the tailedness of the distribution compared to a normal distribution. It can be observed from the heatmaps that there are certain common heads such as (1, 0), (2, 9) which are highly responsive to CM. As we pump in different types of CM data, we can observe that responsivity of some heads [(5, 10), (6,9)] are reducing while for other heads [(1, 7), (4, 8)] it is spiking up. A distinctive pattern that can be noticed from the heatmaps is that as CM data is fed to the models in the order of their linguistic complexity, more and more heads are responding towards the CM stimulus. Even the distribution density curve widens as confirmed by decreasing Kurtosis.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 33, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Responsivity to Code-Mixing", "sec_num": "5.4" }, { "text": "As described earlier, there is no single point in the network which responds to CM data. 
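Since Eq. (3) is the mutual information between a head's averaged attention and the monolingual-vs-CM label, R can be estimated per head with a standard feature-importance routine. The sketch below assumes the 144 per-head mean attentions have already been computed for each probe sentence; it uses scikit-learn's nearest-neighbour mutual information estimator rather than an explicit entropy calculation, and the random features only stand in for real model outputs.

import numpy as np
from scipy.stats import kurtosis
from sklearn.feature_selection import mutual_info_classif

def responsivity(head_features, labels):
    # head_features: (n_sentences, 144) mean attention per head (12 layers x 12 heads),
    #                averaged over all tokens of each probe sentence
    # labels: 1 for code-mixed probes, 0 for monolingual probes
    # returns a 12 x 12 map of R values, i.e. the information gain of each head w.r.t. y
    R = mutual_info_classif(head_features, labels, random_state=0)
    return R.reshape(12, 12)

# toy stand-in for features extracted from a fine-tuned model
rng = np.random.default_rng(0)
X = rng.random((400, 144))
y = np.array([0] * 200 + [1] * 200)  # monolingual vs. CM probe classes
R = responsivity(X, y)
print(R.mean(), kurtosis(R.ravel()))  # mean responsivity (mu) and kurtosis (kappa)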
Previous studies (Elazar et al., 2020) involving probing of specific regions to understand their independent contributions to solving any task has been somewhat futile. It has been observed that heads collec-tively work towards solving tasks, and such specific regions cannot be demarcated -which means that information pertaining to task-solving is represented in a distributed fashion. In line with this, it has been shown that these models can be significantly pruned during inference with minimal drop in performance (Michel et al., 2019; Kovaleva et al., 2019) . Our study confirms these observations for code-mixing as well, through a different visualization approach.", "cite_spans": [ { "start": 106, "end": 127, "text": "(Elazar et al., 2020)", "ref_id": "BIBREF11" }, { "start": 610, "end": 631, "text": "(Michel et al., 2019;", "ref_id": "BIBREF24" }, { "start": 632, "end": 654, "text": "Kovaleva et al., 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Responsivity to Code-Mixing", "sec_num": "5.4" }, { "text": "In this work, we develop different methods of finetuning BERT-like models for CM processing. We then compare the downstream task performances of these models through absolute performance, their stability as well as the layer-wise solvability of certain tasks. To further understand the varied performances between the three types of CM, we perform structural probing. We adopted an existing approach and introduced a couple of new techniques for the visualization of the attention heads as a response to probes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "6" }, { "text": "The most important finding from these probing experiments is that there are discernable changes introduced in the models due to exposure to CM data, of which a particularly interesting observation is that this exposure increases the overall responsivity of the attention heads to CM. As of now, these experiments are purely analytical in nature where we observed how the attention heads behave on a CM stimuli. One future direction is to expand the analysis to a wider range of domains and fine-tuning experiments to understand how generalizable are our findings of distributed information in BERT-like models. We use a fairly simple and easily replicable method for testing this through the responsivity metric that we propose. This method can be further improved to rigorously verify our observations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "6" }, { "text": "It is important to note that Pratapa et al. (2018a) uses GCM to denote \"Generated CM\" data, and not for \"grammatical\" as is used here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As we use just 100k sentences as opposed to 3M sentences, we do not get the same performance jump reported byKhanuja et al. (2020).4 We also carried out training in a curriculum fashion where synthetic CM data was first introduced followed by real CM data in different ratios similar toPratapa et al. (2018a). However, we do not include these numbers as we could not derive any meaningful insights from them. 
This can most probably be due to a fixed constraint of 100k sentences that we use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their many insightful comments and suggestions on our paper. We also thank Tanuja Ganu and Amit Deshpande for their valuable feedback on some of our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "LinCE: A centralized benchmark for linguistic code-switching evaluation", "authors": [ { "first": "Gustavo", "middle": [], "last": "Aguilar", "suffix": "" }, { "first": "Sudipta", "middle": [], "last": "Kar", "suffix": "" }, { "first": "Thamar", "middle": [], "last": "Solorio", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "1803--1813", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Aguilar, Sudipta Kar, and Thamar Solorio. 2020. LinCE: A centralized benchmark for linguis- tic code-switching evaluation. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 1803-1813, Marseille, France. Euro- pean Language Resources Association.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Publicly available clinical BERT embeddings", "authors": [ { "first": "Emily", "middle": [], "last": "Alsentzer", "suffix": "" }, { "first": "John", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "William", "middle": [], "last": "Boag", "suffix": "" }, { "first": "Wei-Hung", "middle": [], "last": "Weng", "suffix": "" }, { "first": "Di", "middle": [], "last": "Jindi", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Naumann", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Mcdermott", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "72--78", "other_ids": { "DOI": [ "10.18653/v1/W19-1909" ] }, "num": null, "urls": [], "raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clini- cal BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Code switching and x-bar theory: The functional head constraint", "authors": [ { "first": "M", "middle": [], "last": "Hedi", "suffix": "" }, { "first": "", "middle": [], "last": "Belazi", "suffix": "" }, { "first": "J", "middle": [], "last": "Edward", "suffix": "" }, { "first": "Almeida Jacqueline", "middle": [], "last": "Rubin", "suffix": "" }, { "first": "", "middle": [], "last": "Toribio", "suffix": "" } ], "year": 1994, "venue": "Linguistic inquiry", "volume": "", "issue": "", "pages": "221--237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hedi M Belazi, Edward J Rubin, and Almeida Jacque- line Toribio. 1994. Code switching and x-bar theory: The functional head constraint. 
Linguistic inquiry, pages 221-237.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "SciB-ERT: A pretrained language model for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3615--3620", "other_ids": { "DOI": [ "10.18653/v1/D19-1371" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3615- 3620, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Grammatical constraints on intra-sentential code-switching: From theories to working models", "authors": [ { "first": "Gayatri", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "Kalika", "middle": [], "last": "Bali", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1612.04538" ] }, "num": null, "urls": [], "raw_text": "Gayatri Bhat, Monojit Choudhury, and Kalika Bali. 2016. Grammatical constraints on intra-sentential code-switching: From theories to working models. arXiv preprint arXiv:1612.04538.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos", "authors": [ { "first": "Ilias", "middle": [], "last": "Chalkidis", "suffix": "" }, { "first": "Manos", "middle": [], "last": "Fergadiotis", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2898--2904", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.261" ] }, "num": null, "urls": [], "raw_text": "Ilias Chalkidis, Manos Fergadiotis, Prodromos Malaka- siotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 2898- 2904, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Language informed modeling of code-switched text", "authors": [ { "first": "Khyathi", "middle": [], "last": "Chandu", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Manzini", "suffix": "" }, { "first": "Sumeet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching", "volume": "", "issue": "", "pages": "92--97", "other_ids": { "DOI": [ "10.18653/v1/W18-3211" ] }, "num": null, "urls": [], "raw_text": "Khyathi Chandu, Thomas Manzini, Sumeet Singh, and Alan W. Black. 2018. Language informed modeling of code-switched text. In Proceedings of the Third Workshop on Computational Approaches to Linguis- tic Code-Switching, pages 92-97, Melbourne, Aus- tralia. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "What does BERT look at? an analysis of BERT's attention", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "276--286", "other_ids": { "DOI": [ "10.18653/v1/W19-4828" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Pro- ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A simple, fast, and effective reparameterization of IBM model 2", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Chahuneau", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "644--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameter- ization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 644-648, At- lanta, Georgia. Association for Computational Lin- guistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "When bert forgets how to pos: Amnesic probing of linguistic properties and mlm predictions", "authors": [ { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Shauli", "middle": [], "last": "Ravfogel", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Jacovi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.00995" ] }, "num": null, "urls": [], "raw_text": "Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2020. When bert forgets how to pos: Am- nesic probing of linguistic properties and mlm pre- dictions. arXiv preprint arXiv:2006.00995.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Jianfeng Gao, and Hoifung Poon. 2020. Domainspecific language model pretraining for biomedical natural language processing", "authors": [ { "first": "Yu", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Tinn", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Lucas", "suffix": "" }, { "first": "Naoto", "middle": [], "last": "Usuyama", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Naumann", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. 
Domain- specific language model pretraining for biomedical natural language processing.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Do attention heads in bert track syntactic dependencies?", "authors": [ { "first": "Jason", "middle": [], "last": "Phu Mon Htut", "suffix": "" }, { "first": "Shikha", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bordia", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R. Bowman. 2019. Do attention heads in bert track syntactic dependencies?", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Processing of sentences with intra-sentential code-switching", "authors": [ { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 1982, "venue": "Coling 1982: Proceedings of the Ninth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aravind Joshi. 1982. Processing of sentences with intra-sentential code-switching. In Coling 1982: Proceedings of the Ninth International Conference on Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The state and fate of linguistic diversity and inclusion in the NLP world", "authors": [ { "first": "Pratik", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Sebastin", "middle": [], "last": "Santy", "suffix": "" }, { "first": "Amar", "middle": [], "last": "Budhiraja", "suffix": "" }, { "first": "Kalika", "middle": [], "last": "Bali", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6282--6293", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.560" ] }, "num": null, "urls": [], "raw_text": "Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Domain divergences: a survey and empirical analysis", "authors": [ { "first": "Devamanyu", "middle": [], "last": "Abhinav Ramesh Kashyap", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Kan", "suffix": "" }, { "first": "", "middle": [], "last": "Zimmermann", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.12198" ] }, "num": null, "urls": [], "raw_text": "Abhinav Ramesh Kashyap, Devamanyu Hazarika, Min- Yen Kan, and Roger Zimmermann. 2020. Domain divergences: a survey and empirical analysis. 
arXiv preprint arXiv:2010.12198.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "GLUECoS: An evaluation benchmark for code-switched NLP", "authors": [ { "first": "Simran", "middle": [], "last": "Khanuja", "suffix": "" }, { "first": "Sandipan", "middle": [], "last": "Dandapat", "suffix": "" }, { "first": "Anirudh", "middle": [], "last": "Srinivasan", "suffix": "" }, { "first": "Sunayana", "middle": [], "last": "Sitaram", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3575--3585", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.329" ] }, "num": null, "urls": [], "raw_text": "Simran Khanuja, Sandipan Dandapat, Anirudh Srini- vasan, Sunayana Sitaram, and Monojit Choudhury. 2020. GLUECoS: An evaluation benchmark for code-switched NLP. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 3575-3585, Online. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Revealing the dark secrets of BERT", "authors": [ { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4365--4374", "other_ids": { "DOI": [ "10.18653/v1/D19-1445" ] }, "num": null, "urls": [], "raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4365-4374, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Nonmetric multidimensional scaling: a numerical method", "authors": [ { "first": "B", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "", "middle": [], "last": "Kruskal", "suffix": "" } ], "year": 1964, "venue": "Psychometrika", "volume": "29", "issue": "2", "pages": "115--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph B Kruskal. 1964. Nonmetric multidimen- sional scaling: a numerical method. Psychometrika, 29(2):115-129.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The IIT Bombay English-Hindi parallel corpus", "authors": [ { "first": "Anoop", "middle": [], "last": "Kunchukuttan", "suffix": "" }, { "first": "Pratik", "middle": [], "last": "Mehta", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhat- tacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In Proceedings of the Eleventh In- ternational Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. 
Euro- pean Language Resources Association (ELRA).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2020, "venue": "Bioinformatics", "volume": "36", "issue": "4", "pages": "1234--1240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Linguistic knowledge and transferability of contextual representations", "authors": [ { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1073--1094", "other_ids": { "DOI": [ "10.18653/v1/N19-1112" ] }, "num": null, "urls": [], "raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Domain adaptation with BERT-based domain classification and data selection", "authors": [ { "first": "Xiaofei", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP", "volume": "", "issue": "", "pages": "76--83", "other_ids": { "DOI": [ "10.18653/v1/D19-6109" ] }, "num": null, "urls": [], "raw_text": "Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nalla- pati, and Bing Xiang. 2019. Domain adaptation with BERT-based domain classification and data se- lection. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 76-83, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Are sixteen heads really better than one?", "authors": [ { "first": "Paul", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Ad- vances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Towards open domain event trigger identification using adversarial domain adaptation", "authors": [ { "first": "Aakanksha", "middle": [], "last": "Naik", "suffix": "" }, { "first": "Carolyn", "middle": [], "last": "Rose", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7618--7624", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.681" ] }, "num": null, "urls": [], "raw_text": "Aakanksha Naik and Carolyn Rose. 2020. Towards open domain event trigger identification using ad- versarial domain adaptation. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 7618-7624, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "All that is English may be Hindi: Enhancing language identification through automatic ranking of the likeliness of word borrowing in social media", "authors": [ { "first": "Jasabanta", "middle": [], "last": "Patro", "suffix": "" }, { "first": "Bidisha", "middle": [], "last": "Samanta", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Abhipsa", "middle": [], "last": "Basu", "suffix": "" }, { "first": "Prithwish", "middle": [], "last": "Mukherjee", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2264--2274", "other_ids": { "DOI": [ "10.18653/v1/D17-1240" ] }, "num": null, "urls": [], "raw_text": "Jasabanta Patro, Bidisha Samanta, Saurabh Singh, Ab- hipsa Basu, Prithwish Mukherjee, Monojit Choud- hury, and Animesh Mukherjee. 2017. All that is English may be Hindi: Enhancing language identi- fication through automatic ranking of the likeliness of word borrowing in social media. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2264-2274, Copenhagen, Denmark. Association for Computa- tional Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets", "authors": [ { "first": "Yifan", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Shankai", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Zhiyong", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", "volume": "", "issue": "", "pages": "58--65", "other_ids": { "DOI": [ "10.18653/v1/W19-5006" ] }, "num": null, "urls": [], "raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. 
Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58- 65, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "How multilingual is multilingual BERT?", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4996--5001", "other_ids": { "DOI": [ "10.18653/v1/P19-1493" ] }, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4996- 5001, Florence, Italy. Association for Computa- tional Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Sometimes i'll start a sentence in spanish y termino en espa\u00f1ol: Toward a typology of code-switching", "authors": [ { "first": "Shana", "middle": [], "last": "Poplack", "suffix": "" } ], "year": 2000, "venue": "The bilingualism reader", "volume": "18", "issue": "2", "pages": "221--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shana Poplack. 2000. Sometimes i'll start a sentence in spanish y termino en espa\u00f1ol: Toward a typol- ogy of code-switching. The bilingualism reader, 18(2):221-256.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "When BERT Plays the Lottery, All Tickets Are Winning", "authors": [ { "first": "Sai", "middle": [], "last": "Prasanna", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "3208--3229", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.259" ] }, "num": null, "urls": [], "raw_text": "Sai Prasanna, Anna Rogers, and Anna Rumshisky. 2020. When BERT Plays the Lottery, All Tickets Are Winning. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 3208-3229, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Language modeling for code-mixing: The role of linguistic theory based synthetic data", "authors": [ { "first": "Adithya", "middle": [], "last": "Pratapa", "suffix": "" }, { "first": "Gayatri", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "Sunayana", "middle": [], "last": "Sitaram", "suffix": "" }, { "first": "Sandipan", "middle": [], "last": "Dandapat", "suffix": "" }, { "first": "Kalika", "middle": [], "last": "Bali", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1543--1553", "other_ids": { "DOI": [ "10.18653/v1/P18-1143" ] }, "num": null, "urls": [], "raw_text": "Adithya Pratapa, Gayatri Bhat, Monojit Choudhury, Sunayana Sitaram, Sandipan Dandapat, and Kalika Bali. 2018a. 
Language modeling for code-mixing: The role of linguistic theory based synthetic data. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1543-1553, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Word embeddings for code-mixed language processing", "authors": [ { "first": "Adithya", "middle": [], "last": "Pratapa", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "Sunayana", "middle": [], "last": "Sitaram", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3067--3072", "other_ids": { "DOI": [ "10.18653/v1/D18-1344" ] }, "num": null, "urls": [], "raw_text": "Adithya Pratapa, Monojit Choudhury, and Sunayana Sitaram. 2018b. Word embeddings for code-mixed language processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 3067-3072, Brussels, Bel- gium. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Estimating code-switching on Twitter with a novel generalized word-level language detection technique", "authors": [ { "first": "Shruti", "middle": [], "last": "Rijhwani", "suffix": "" }, { "first": "Royal", "middle": [], "last": "Sequiera", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "Kalika", "middle": [], "last": "Bali", "suffix": "" }, { "first": "Chandra Shekhar", "middle": [], "last": "Maddila", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1971--1982", "other_ids": { "DOI": [ "10.18653/v1/P17-1180" ] }, "num": null, "urls": [], "raw_text": "Shruti Rijhwani, Royal Sequiera, Monojit Choud- hury, Kalika Bali, and Chandra Shekhar Maddila. 2017. Estimating code-switching on Twitter with a novel generalized word-level language detection technique. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1971-1982, Van- couver, Canada. Association for Computational Lin- guistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Improved sentiment detection via label transfer from monolingual to synthetic code-switched text", "authors": [ { "first": "Bidisha", "middle": [], "last": "Samanta", "suffix": "" }, { "first": "Niloy", "middle": [], "last": "Ganguly", "suffix": "" }, { "first": "Soumen", "middle": [], "last": "Chakrabarti", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3528--3537", "other_ids": { "DOI": [ "10.18653/v1/P19-1343" ] }, "num": null, "urls": [], "raw_text": "Bidisha Samanta, Niloy Ganguly, and Soumen Chakrabarti. 2019. Improved sentiment detection via label transfer from monolingual to synthetic code-switched text. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 3528-3537, Florence, Italy. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "The production of code-mixed discourse", "authors": [ { "first": "David", "middle": [], "last": "Sankoff", "suffix": "" } ], "year": 1998, "venue": "36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "8--21", "other_ids": { "DOI": [ "10.3115/980845.980848" ] }, "num": null, "urls": [], "raw_text": "David Sankoff. 1998. The production of code-mixed discourse. In 36th Annual Meeting of the Associa- tion for Computational Linguistics and 17th Inter- national Conference on Computational Linguistics, Volume 1, pages 8-21, Montreal, Quebec, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "MSR India at SemEval-2020 task 9: Multilingual models can do codemixing too", "authors": [ { "first": "Anirudh", "middle": [], "last": "Srinivasan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "951--956", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anirudh Srinivasan. 2020. MSR India at SemEval- 2020 task 9: Multilingual models can do code- mixing too. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 951-956, Barcelona (online). International Committee for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "How to fine-tune bert for text classification?", "authors": [ { "first": "Chi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Yige", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "China National Conference on Chinese Computational Linguistics", "volume": "", "issue": "", "pages": "194--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In China National Conference on Chinese Computa- tional Linguistics, pages 194-206. Springer.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Code switching and code mixing as a communicative strategy in multilingual discourse", "authors": [ { "first": "W", "middle": [ "J" ], "last": "Mary", "suffix": "" }, { "first": "", "middle": [], "last": "Tay", "suffix": "" } ], "year": 1989, "venue": "World Englishes", "volume": "8", "issue": "3", "pages": "407--417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mary WJ Tay. 1989. Code switching and code mix- ing as a communicative strategy in multilingual dis- course. World Englishes, 8(3):407-417.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. 
In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593- 4601, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Dimensionality reduction: a comparative", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Postma", "suffix": "" }, { "first": "Jaap", "middle": [], "last": "Van Den Herik", "suffix": "" } ], "year": 2009, "venue": "J Mach Learn Res", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens Van Der Maaten, Eric Postma, and Jaap Van den Herik. 2009. Dimensionality reduction: a comparative. J Mach Learn Res, 10(66-71):13.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Adversarial domain adaptation for machine reading comprehension", "authors": [ { "first": "Huazheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Hongning", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2510--2520", "other_ids": { "DOI": [ "10.18653/v1/D19-1254" ] }, "num": null, "urls": [], "raw_text": "Huazheng Wang, Zhe Gan, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, and Hongning Wang. 2019. Adversar- ial domain adaptation for machine reading compre- hension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 2510-2520, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Code-switched language models using neural based synthetic data from parallel sentences", "authors": [ { "first": "Andrea", "middle": [], "last": "Genta Indra Winata", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "271--280", "other_ids": { "DOI": [ "10.18653/v1/K19-1026" ] }, "num": null, "urls": [], "raw_text": "Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2019. Code-switched lan- guage models using neural based synthetic data from parallel sentences. In Proceedings of the 23rd Con- ference on Computational Natural Language Learn- ing (CoNLL), pages 271-280, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Alternating language modeling for cross-lingual pre-training", "authors": [ { "first": "Jian", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Shuming", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Dongdong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shuangzhi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhoujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "9386--9393", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6480" ] }, "num": null, "urls": [], "raw_text": "Jian Yang, Shuming Ma, Dongdong Zhang, ShuangZhi Wu, Zhoujun Li, and Ming Zhou. 2020a. Alternat- ing language modeling for cross-lingual pre-training. Proceedings of the AAAI Conference on Artificial In- telligence, 34(05):9386-9393.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Finbert: A pretrained language model for financial communications", "authors": [ { "first": "Yi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Mark", "middle": [ "Christopher" ], "last": "Siy", "suffix": "" }, { "first": "U", "middle": [ "Y" ], "last": "", "suffix": "" }, { "first": "Allen", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Yang, Mark Christopher Siy UY, and Allen Huang. 2020b. Finbert: A pretrained language model for financial communications.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Feature adaptation of pre-trained language models across languages and domains with robust self-training", "authors": [ { "first": "Hai", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Qingyu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Ruidan", "middle": [], "last": "He", "suffix": "" }, { "first": "Juntao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "7386--7399", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.599" ] }, "num": null, "urls": [], "raw_text": "Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, and Lidong Bing. 2020. Feature adaptation of pre-trained language models across languages and domains with robust self-training. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 7386- 7399, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "A little bit is worse than none: Ranking with limited training data", "authors": [ { "first": "Xinyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Yates", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2020, "venue": "Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing", "volume": "", "issue": "", "pages": "107--112", "other_ids": { "DOI": [ "10.18653/v1/2020.sustainlp-1.14" ] }, "num": null, "urls": [], "raw_text": "Xinyu Zhang, Andrew Yates, and Jimmy Lin. 2020. A little bit is worse than none: Ranking with lim- ited training data. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 107-112, Online. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Layer-wise F1 scores for LID, POS, NER and SENT respectively across different layers. The dashed lines represent the enhi versions and solid lines represent the enes versions of different tasks." }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Intra-head distances d(H i , H j ) for models.The points are colored layer-wise and follows dark blue \u2192 light blue/red \u2192 dark red scheme. The rows are the models which are used and columns are the different set of probes used." }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "after removing [CLS] & [SEP] tokens present in that head. ([CLS] & [SEP]) tokens are removed as they act as a sink to non-attended tokens." }, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "R of different models when classifying Monolingual vs. Code-mixing sentence" }, "TABREF0": { "text": "81\u00b12.5 58.42\u00b11.1 59.50\u00b10.9 75.55\u00b10.6 93.35\u00b10.2 87.49\u00b10.1 63.40\u00b10.5 95.99\u00b10.0 95.80\u00b10.4 71.95\u00b10.8 63.25\u00b11.9 m l, 68.07\u00b11.5 58.08\u00b10.8 59.39\u00b11.0 76.53\u00b11.0 93.84\u00b10.1 88.00\u00b10.2 64.09\u00b10.2 96.09\u00b10.1 95.32\u00b10.9 70.53\u00b13.5 62.94\u00b12.7 m g, 68.64\u00b11.5 57.90\u00b11.1 59.88\u00b10.7 76.86\u00b10.6 93.74\u00b10.1 87.79\u00b10.2 63.79\u00b10.2 96.06\u00b10.0 95.41\u00b10.8 70.11\u00b11.8 55.19\u00b16.5 m r, 68.51\u00b10.7 58.25\u00b10.8 60.46\u00b10.6 76.86\u00b10.5 93.68\u00b10.1 88.00\u00b10.0 63.38\u00b10.0 96.12\u00b10.0 94.60\u00b10.2 73.54\u00b13.9 60.00\u00b15.7", "type_str": "table", "content": "
         SENT                  NER                   POS                              LID                   QA         NLI
model    enes       enhi       enes       enhi       enes       enhi       enhi       enes       enhi       enhi       enhi
m        67.81±2.5  58.42±1.1  59.50±0.9  75.55±0.6  93.35±0.2  87.49±0.1  63.40±0.5  95.99±0.0  95.80±0.4  71.95±0.8  63.25±1.9
m_l      68.07±1.5  58.08±0.8  59.39±1.0  76.53±1.0  93.84±0.1  88.00±0.2  64.09±0.2  96.09±0.1  95.32±0.9  70.53±3.5  62.94±2.7
m_g      68.64±1.5  57.90±1.1  59.88±0.7  76.86±0.6  93.74±0.1  87.79±0.2  63.79±0.2  96.06±0.0  95.41±0.8  70.11±1.8  55.19±6.5
m_r      68.51±0.7  58.25±0.8  60.46±0.6  76.86±0.5  93.68±0.1  88.00±0.0  63.38±0.0  96.12±0.0  94.60±0.2  73.54±3.9  60.00±5.7
", "html": null, "num": null } } } }