{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:55.410705Z" }, "title": "The Effectiveness of Intermediate-Task Training for Code-Switched Natural Language Understanding", "authors": [ { "first": "Archiki", "middle": [], "last": "Prasad", "suffix": "", "affiliation": { "laboratory": "", "institution": "UNC Chapel Hill", "location": {} }, "email": "archiki@cs.unc.edu" }, { "first": "Mohammad", "middle": [ "Ali" ], "last": "Rehan", "suffix": "", "affiliation": {}, "email": "alirehan@cse.iitb.ac.in" }, { "first": "Shreya", "middle": [], "last": "Pathak", "suffix": "", "affiliation": {}, "email": "shreyapathak@cse.iitb.ac.in" }, { "first": "Preethi", "middle": [], "last": "Jyothi", "suffix": "", "affiliation": {}, "email": "pjyothi@cse.iitb.ac.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "While recent benchmarks have spurred a lot of new work on improving the generalization of pretrained multilingual language models on multilingual tasks, techniques to improve code-switched natural language understanding tasks have been far less explored. In this work, we propose the use of bilingual intermediate pretraining as a reliable technique to derive large and consistent performance gains using code-switched text on three different NLP tasks: Natural Language Inference (NLI), Question Answering (QA) and Sentiment Analysis (SA). We show consistent performance gains on four different code-switched language-pairs (Hindi-English, Spanish-English, Tamil-English and Malayalam-English) for SA and on Hindi-English for NLI and QA. We also present a code-switched masked language modeling (MLM) pretraining technique that consistently benefits SA compared to standard MLM pretraining using real code-switched text.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "While recent benchmarks have spurred a lot of new work on improving the generalization of pretrained multilingual language models on multilingual tasks, techniques to improve code-switched natural language understanding tasks have been far less explored. In this work, we propose the use of bilingual intermediate pretraining as a reliable technique to derive large and consistent performance gains using code-switched text on three different NLP tasks: Natural Language Inference (NLI), Question Answering (QA) and Sentiment Analysis (SA). We show consistent performance gains on four different code-switched language-pairs (Hindi-English, Spanish-English, Tamil-English and Malayalam-English) for SA and on Hindi-English for NLI and QA. We also present a code-switched masked language modeling (MLM) pretraining technique that consistently benefits SA compared to standard MLM pretraining using real code-switched text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Code-switching is a widely-occurring linguistic phenomenon in which multiple languages are used within the span of a single utterance or conversation. While large pretrained multilingual models like mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) have been successfully used for low-resource languages and effective zero-shot cross-lingual transfer (Pires et al., 2019; Conneau et al., 2020; Wu and Dredze, 2019) , techniques to help these models generalize to code-switched text have not been sufficiently explored. 
Intermediate-task training (Phang et al., 2018) was recently proposed as an effective training strategy for transfer learning. This scheme involves fine-tuning a pretrained model on data from one or more intermediate tasks, followed by fine-tuning on the target task. The intermediate task could differ from the target task and it could also be in a different language. This technique was shown to help with both task-based and language-based transfer; it benefited target tasks in English (Vu et al., 2020) and helped improve zero-shot cross-lingual transfer. In this work, we introduce bilingual intermediate-task training as a reliable training strategy to improve performance on three code-switched natural language understanding tasks: Natural Language Inference (NLI), factoid-based Question Answering (QA) and Sentiment Analysis (SA). Bilingual training for a language pair X-EN involves pretraining with an English intermediate task along with its translations in X. The NLI, QA and SA tasks require deeper linguistic reasoning (as opposed to sequence labeling tasks like part-of-speech tagging) and exhibit high potential for improvement via transfer learning. (The fact that NLI, QA and SA have more room for improvement compared to POS and NER tagging is evident from the leaderboard statistics in (Khanuja et al., 2020b).) We present SA results for four different language pairs: Hindi-English (HI-EN), Spanish-English (ES-EN), Tamil-English (TA-EN) and Malayalam-English (ML-EN), and NLI/QA results for HI-EN. 1 Our main findings can be summarized as follows:", "cite_spans": [ { "start": 205, "end": 226, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF17" }, { "start": 231, "end": 259, "text": "XLM-R (Conneau et al., 2020)", "ref_id": null }, { "start": 362, "end": 382, "text": "(Pires et al., 2019;", "ref_id": "BIBREF34" }, { "start": 383, "end": 404, "text": "Conneau et al., 2020;", "ref_id": "BIBREF15" }, { "start": 405, "end": 425, "text": "Wu and Dredze, 2019)", "ref_id": "BIBREF54" }, { "start": 557, "end": 576, "text": "(Phang et al., 2018", "ref_id": "BIBREF33" }, { "start": 1019, "end": 1036, "text": "(Vu et al., 2020)", "ref_id": "BIBREF50" }, { "start": 1837, "end": 1860, "text": "(Khanuja et al., 2020b)", "ref_id": "BIBREF27" }, { "start": 2052, "end": 2053, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Bilingual intermediate-task training consistently yields significant performance improvements on NLI, QA and SA using two different pretrained multilingual models, mBERT and XLM-R. We also show the impact of translation and transliteration quality on this training scheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Pretraining using a masked language modeling (MLM) objective on real code-switched text can be used, in conjunction with bilingual training, for additional performance improvements on code-switched target tasks. We also present a code-switched MLM variant that yields larger improvements on SA compared to standard MLM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Intermediate-Task Training. This scheme starts with a publicly-available multilingual model that has been pretrained on large volumes of multilingual text using MLM-based training objectives.
This model is subsequently fine-tuned using data from one or more intermediate tasks before finally fine-tuning on code-switched data from the target tasks. Single Intermediate-Task Training makes use of existing monolingual NLI, SA and QA datasets as intermediate tasks before fine-tuning on the respective code-switched target tasks. For a language pair X-EN, where X \u2208 {ES, HI, TA, ML}, we explored the use of three different intermediate tasks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "1. Task-specific data in English (EN SING-TASK): In this setting, we carry out intermediate training using a (relatively) larger English corpus of the same task as our final downstream task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "2. Task-specific data in X (X SING-TASK):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "Here, we carry out intermediate training using a corpus of the same task in the matrix language (i.e., not English) present in our code-switched corpus. This corpus can be constructed by translating a monolingual English corpus into the target language, and then further transliterating it to be consistent with the Romanized forms present in the target tasks. (Footnote 2: Code-switched data for some Indic languages in our target corpora were only available in the Romanized form. Therefore, we only work with Romanized text in all our experiments, including intermediate tasks.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "3. Task-specific data in both English and X that we refer to as bilingual intermediate-task training (X-EN SING-TASK): This intermediate-task pretraining method involves creating training batches with an equal number of examples from both languages. We conjecture that interleaving training instances from both languages within a batch encourages the model to simultaneously perform well on both languages, and could subsequently translate to improved performance on code-switched text in these specific language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "This claim is borne out in our experimental results detailed in Section 4. (We also show, in Section 4.2, the importance of mixing instances from both languages rather than adopting a sequential training strategy over the two languages.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "Multi Intermediate-Task Training involves two intermediate tasks (T1 and T2) simultaneously. This training is done using two different task heads (one per task) with the pretrained models. Each batch is randomly populated with instances from task T1 or T2. We follow Raffel et al. (2020) to sample batches from task T1 with probability P_T1 = min(e_T1, K) / (min(e_T1, K) + min(e_T2, K)), where e_T1 and e_T2 are the number of training examples in tasks T1 and T2, respectively; P_T2 is computed analogously. The constant K = 2^16 is used to prevent over-sampling. We experiment with NLI and QA as the two intermediate tasks T1 and T2 and refer to this system as HI-EN/NLI-QA MULTI-TASK. We use the merged EN and HI datasets from HI-EN SING-TASK for each task.
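As an illustration of this batch-sampling rule, the short sketch below implements examples-proportional mixing with the cap K; the function names and dataset sizes are hypothetical placeholders rather than our actual training code.

```python
import random

def task_sampling_probs(n_examples, cap=2**16):
    # Examples-proportional mixing with an artificial cap K, following
    # Raffel et al. (2020): P_Ti = min(e_Ti, K) / sum_j min(e_Tj, K).
    capped = {task: min(n, cap) for task, n in n_examples.items()}
    total = sum(capped.values())
    return {task: c / total for task, c in capped.items()}

# Hypothetical example counts for the two intermediate tasks.
probs = task_sampling_probs({"NLI": 250_000, "QA": 82_000})

def sample_task_for_batch(probs):
    # Every training batch is drawn entirely from one task, chosen at random
    # according to the probabilities above; each task uses its own head.
    tasks, weights = zip(*probs.items())
    return random.choices(tasks, weights=weights, k=1)[0]
```

With K = 2^16, both of these (hypothetical) dataset sizes exceed the cap, so each task would be sampled with probability 0.5.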
We also explored MLM training on real code-switched text as one of the tasks, in addition to the merged X-EN task-specific intermediate-tasks (referred to as X-EN/MLM MULTI-TASK).", "cite_spans": [ { "start": 274, "end": 294, "text": "Raffel et al. (2020)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "Code-Switched MLM. A common approach to training models for code-switched tasks is to perform additional MLM on real (or synthetic) code-switched text. However, randomly masking from the pool of all tokens in a sentence may not be the most effective use of real code-switched text and differentiating it from monolingual text, especially if one has access to word-level language tags. Given word-level language labels for each token in the code-switched sentences, we aim to emphasize switching via the MLM training objective by masking tokens from words that lie on the switching boundaries. We refer to this training strategy as code-switched MLM. For example, consider the following sentence where tokens that can be masked are enclosed within boxes for both the standard MLM and code-switched MLM strategies, respectively: In the first sentence, tokens from all the words can be masked, as in standard MLM pretraining. In the second sentence that uses code-switched MLM pretraining, only tokens from words at the boundary of a language switch can be masked. To implement this, we need access to annotated language tags for each sentence or a highly accurate language identity detection system. (Neither of these were available for Tamil or Malayalam datasets; hence our results for code-switched MLM are restricted to Hindi and Spanish.) An analysis of the MLM data showed that 45% of all tokens belonged to words on a switching boundary, therefore, the MLM masking probability of these tokens was increased from 0.15 to 0.3 to roughly balance the number of tokens that are masked on average. As the evaluation metric, we use accuracies for NLI and SA over two (entailment/contradiction) and three labels (positive/negative/neutral), respectively, and F1 scores for the QA task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "Yeh", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "As intermediate tasks for NLI and QA, we used EN and HI versions of the MultiNLI dataset with 250/10K examples in the train/dev sets and the SQuAD dataset (Rajpurkar et al., 2016) consisting of 82K/5K question-answer pairs in its train/dev sets, respectively. The HI translations for SQuAD (in Devanagari) are available in the XTREME (Hu et al., 2020) benchmark. We used indic-trans (Bhat et al., 2014) to transliterate the HI translations, since NLI and QA in GLUECOS use Romanized HI text. For sentiment analysis in ES-EN and HI-EN, we used the TweetEval (Barbieri et al., 2020) dataset (63K sentences in total) and its translations in ES and HI generated via Mari-anMT 3 (Junczys-Dowmunt et al., 2018) and Indic-Trans MT (Ramesh et al., 2021) , respectively, for intermediate-task training. For TA-EN and ML-EN, we used the positive, negative and neutral labelled sentences from the SST dataset (Socher et al., 2013 ) (100K instances) as the intermediate task. The TA and ML translations were also generated using the IndicTrans MT system. The translations were further transliterated using Bhat et al. 
(2014) for HI and the Bing Translator API 4 for TA and ML.", "cite_spans": [ { "start": 155, "end": 179, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF38" }, { "start": 334, "end": 351, "text": "(Hu et al., 2020)", "ref_id": "BIBREF21" }, { "start": 383, "end": 402, "text": "(Bhat et al., 2014)", "ref_id": null }, { "start": 557, "end": 580, "text": "(Barbieri et al., 2020)", "ref_id": "BIBREF5" }, { "start": 724, "end": 745, "text": "(Ramesh et al., 2021)", "ref_id": null }, { "start": 898, "end": 918, "text": "(Socher et al., 2013", "ref_id": "BIBREF41" }, { "start": 1094, "end": 1112, "text": "Bhat et al. (2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Intermediate Task Datasets", "sec_num": "3.2" }, { "text": "We use a corpus of 64K real code-switched sentences by pooling together data from prior work Swami et al., 2018; Chandu et al., 2018b) ; we will call this corpus GEN-CS. We supplant this text corpus with an additional 28K code-switched sentences mined from movie scripts (referred to as MOVIE-CS in Tarunesh et al. (2021b)), which is more similar in domain to GLUECOS NLI. We further used code-switched text from Patwa et al. (2020) , Bhat et al. (2017) , and Patro et al. (2017) resulting in a total of 185K HI-EN sentences. For ES-EN, 66K real code-switched sentences were accumulated from prior work (Patwa et al., 2020; Solorio et al., 2014; AlGhamdi et al., 2016; Aguilar et al., 2018; Vilares et al., 2016) . For TA-EN and ML- EN (Chakravarthi et al., 2020b Banerjee et al., 2018; Mandl et al., 2020; Chakravarthi et al., 2020a) , we used roughly 130K and 40K real code-switched sentences, respectively.", "cite_spans": [ { "start": 93, "end": 112, "text": "Swami et al., 2018;", "ref_id": "BIBREF44" }, { "start": 113, "end": 134, "text": "Chandu et al., 2018b)", "ref_id": "BIBREF13" }, { "start": 413, "end": 432, "text": "Patwa et al. (2020)", "ref_id": "BIBREF31" }, { "start": 435, "end": 453, "text": "Bhat et al. (2017)", "ref_id": "BIBREF6" }, { "start": 603, "end": 623, "text": "(Patwa et al., 2020;", "ref_id": "BIBREF31" }, { "start": 624, "end": 645, "text": "Solorio et al., 2014;", "ref_id": "BIBREF42" }, { "start": 646, "end": 668, "text": "AlGhamdi et al., 2016;", "ref_id": "BIBREF1" }, { "start": 669, "end": 690, "text": "Aguilar et al., 2018;", "ref_id": "BIBREF0" }, { "start": 691, "end": 712, "text": "Vilares et al., 2016)", "ref_id": "BIBREF49" }, { "start": 733, "end": 763, "text": "EN (Chakravarthi et al., 2020b", "ref_id": null }, { "start": 764, "end": 786, "text": "Banerjee et al., 2018;", "ref_id": "BIBREF4" }, { "start": 787, "end": 806, "text": "Mandl et al., 2020;", "ref_id": "BIBREF29" }, { "start": 807, "end": 834, "text": "Chakravarthi et al., 2020a)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Masked Language Modelling Datasets", "sec_num": "3.3" }, { "text": "As mentioned previously, for intermediate-task training, we use the MultiNLI and SQuAD v1.1 data from the translate-train sets of the XTREME benchmark these datasets are generated using the indic-trans tool (Bhat et al., 2014) starting from their Devanagari counterparts. For NLI, we directly transliterated the premise and hypothesis. For QA, the context, question and answer were transliterated and the answer span was corrected. This was done by calculating the start and stop indices of the span, followed by a piece-wise transliteration. We finally checked if the context-span matched the answer text. All instances passed this check. 
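The sketch below illustrates this span correction; it is a minimal reconstruction of the procedure described above, with a placeholder transliterate function standing in for the actual tool (e.g., indic-trans), not the exact code we used.

```python
def transliterate_with_span(context, answer_start, answer_text, transliterate):
    # Piece-wise transliteration of a QA instance that keeps the answer span aligned:
    # transliterate the text before, inside and after the span separately, then
    # recompute the start index of the answer in the transliterated context.
    end = answer_start + len(answer_text)
    before, answer, after = context[:answer_start], context[answer_start:end], context[end:]

    tr_before, tr_answer, tr_after = transliterate(before), transliterate(answer), transliterate(after)
    new_context = tr_before + tr_answer + tr_after
    new_start = len(tr_before)

    # Final check: the corrected span must reproduce the transliterated answer text.
    assert new_context[new_start:new_start + len(tr_answer)] == tr_answer
    return new_context, new_start, tr_answer
```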
To benefit future work in this direction, we provide these transliterated datasets 7 .", "cite_spans": [ { "start": 207, "end": 226, "text": "(Bhat et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "XTREME Translation-Transliteration", "sec_num": "3.4" }, { "text": "mBERT is a transformer model (Vaswani et al., 2017) pretrained using MLM on the Wikipedia corpus of 104 languages. XLM-R uses a similar training objective as mBERT but is trained on orders of magnitude more data from the CommonCrawl corpus spanning 100 languages and yields competitive results on low-resource languages (Conneau et al., 2020) . We use the bert-base-multilingual-cased and xlm-roberta-base models 8 from the Transformers library (Wolf et al., 2019) . We refer readers to Appendix A and Appendix B for more implementation details.", "cite_spans": [ { "start": 29, "end": 51, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF48" }, { "start": 320, "end": 342, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF15" }, { "start": 445, "end": 464, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Model Details", "sec_num": "3.5" }, { "text": "translate-train/squad.translate.train. en-hi.json. 7 https://www.cse.iitb.ac.in/~pjyothi/ CS 8 We also explored a multilingual model IndicBERT (Kakwani et al., 2020) trained exclusively on Indian languages. However, preliminary experiments using this model did not yield satisfactory performance, so we did not pursue it further. In future work, we will aim to use other recently released pretrained models such as MuRIL (Khanuja et al., 2021) . Table 1 shows our main results for SA on ES-EN, HI-EN, TA-EN and ML-EN. We observe that bilingual intermediate-task training, X-EN SING-TASK, outperforms EN SING-TASK and X SING-TASK with both mBERT and XLM-R. The relative improvements of X-EN SING-TASK over the baseline vary across language pairs reaching up to 9.33% for ES. For all language pairs except HI-EN, X-EN/MLM MULTI-TASK is the best-performing system. 9 This demonstrates the benefits of MLM training in conjunction with intermediate-task training. A notable advantage of our bilingual training is that we outperform (or match) previous state-of-the-art with an order of magnitude less data. Our best ES-EN system yields an F1 of 71.7 compared to Pratapa et al. (2018) with an F1 of 64.6. For HI-EN, our best F1 of 72.6 matches the 2 nd -ranked system (Srinivasan, 2020) on SentiMix 2020 (Patwa et al., 2020) . For TA-EN and ML-EN, our best systems match the score of the best TweetEval model in Gupta et al. (2021) (Chakravarthy et al., 2020) 62.41 --- Best results for each model are underlined and the overall best results are in bold. \u2020 Due to dataset changes, we cannot directly cite the results from the paper and report the numbers from the leaderboard after consulting the authors of GLUECOS.", "cite_spans": [ { "start": 93, "end": 94, "text": "8", "ref_id": null }, { "start": 421, "end": 443, "text": "(Khanuja et al., 2021)", "ref_id": null }, { "start": 1298, "end": 1318, "text": "(Patwa et al., 2020)", "ref_id": "BIBREF31" }, { "start": 1406, "end": 1425, "text": "Gupta et al. 
(2021)", "ref_id": "BIBREF18" }, { "start": 1426, "end": 1453, "text": "(Chakravarthy et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 446, "end": 453, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Model Details", "sec_num": "3.5" }, { "text": "in Table 1 ) provides additional performance gains for ES-EN and HI-EN. We do not report codeswitched MLM results for TA-EN and ML-EN since we do not have access to language labels or a trained language identification system for either language. The ES-EN MLM dataset contains several sentences with no switching which are discarded for both standard and code-switched MLM. In Table 1 , we compare +MLM and CODE-SWITCHED MLM only using sentences that contain code-switching. 10 With access to translation and transliteration tools for a target language, we show superior results on four different language pairs for the sentiment analysis task. Even in resource-constrained settings like TA-EN and ML-EN, we obtain state-of-the-art performance using our proposed techniques. In Section 4.3, we will examine the influence of translation and transliteration quality on performance. formance between sequentially training on English followed by Hindi versus mixing instances from both languages as in HI-EN SING-TASK. We observe a clear deterioration in performance with sequential training, with the latter performing even worse than its monolingual counterparts (EN SING-TASK and HI SING-TASK). This confirms that bilingual training is essential to improved performance on code-switched tasks.", "cite_spans": [ { "start": 475, "end": 477, "text": "10", "ref_id": null } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": "TABREF3" }, { "start": 377, "end": 384, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results on Sentiment Analysis", "sec_num": "4.1" }, { "text": "NLI/QA MULTI TASK Results. Table 2 shows that the MULTI-TASK systems yield additional gains over the SING-TASK systems. Using both NLI and QA as intermediate tasks benefits both NLI and QA for mBERT and QA for XLM-R, and corroborates observations in prior work (Tarunesh et al., 2021a; . Although intermediate-task training is beneficial across tasks, the relative improvements in QA are higher than that for NLI (see Appendix C for some QA examples). We conjecture this is due to varying dataset similarity between intermediate-tasks and target tasks (Vu et al., 2020) . In QA, this similarity is higher and in NLI the conversational nature and large premise lengths reduces this similarity. The effect of domain similarity is more pronounced with MLM training resulting in variations between absolute 1.5-2%. More experiments detailing when MLM training benefits the downstream tasks is described in Section 4.4.", "cite_spans": [ { "start": 261, "end": 285, "text": "(Tarunesh et al., 2021a;", "ref_id": null }, { "start": 552, "end": 569, "text": "(Vu et al., 2020)", "ref_id": "BIBREF50" } ], "ref_spans": [ { "start": 27, "end": 34, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results on Sentiment Analysis", "sec_num": "4.1" }, { "text": "Transliteration and translation are the two key preprocessing steps employed for bilingual pretraining. Since we make use of existing translation and transliteration tools that are not error-free, it is useful to understand the impact of such translation and transliteration tools on final downstream task performance. 
To assess the impact of both translation and transliteration quality on NLI and QA performance, we use two small datasets XNLI (Conneau et al., 2018) and XQuAD (Artetxe et al., 2020 ) for which we have manual HI (Devanagari) translations. We combined the test and dev sets of XNLI to get the data for intermediate-task training. We discarded all examples labelled neutral and instances where the crowdsourced annotations did not match the designated labels 12 . After this, we were left with roughly 4.2K/0.5K instances in the train/dev sets, respectively (the dev set is used for early stopping during intermediate-task training). For XNLI, the premises and hypotheses were directly translated and for XQuAD we adopted the same translation procedure listed in Hu et al. (2020) .", "cite_spans": [ { "start": 446, "end": 468, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF16" }, { "start": 479, "end": 500, "text": "(Artetxe et al., 2020", "ref_id": "BIBREF2" }, { "start": 1080, "end": 1096, "text": "Hu et al. (2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Influence of Translation and Transliteration Quality", "sec_num": "4.3" }, { "text": "In Table 4 , we compare the performance of HI-EN SING-TASK using manual translations with translations from the Google Translate API 13 , and also transliterations from this API with those from indictrans. As expected, using manual translations is most beneficial to the downstream task. The use of Google Translate, however, does not significantly hamper performance. Similar to the results in Table 4 for bilingual intermediate-task training, we present a similar analysis in Table 5 when using task-specific data in HI SING-TASK with mBERT and observe the same trends. Keeping the translation method fixed as manual, we tried using indictrans for transliteration instead of the Google API. We see this led to a decrease in performance in all the 4 cases (i.e., across two models and two tasks in Tables 4 and 5) Table 6 shows the impact of transliteration on sentiment analysis of TA-EN and ML-EN. Again, we see that using an improved transliteration tool led to improved performance across both Tamil and Malayalam. Figure 1 illustrates different MA transliterations. From the figure, we notice that indic-trans tends to retain some Malayalam characters in its native script (possibly due to incomplete Unicode support) and also does not produce very accurate transliterations. Transliterations from the Bing API are more phonetically accurate. Table 7 shows an example from the HI-EN NLI dataset, that is translated and transliterated using Google Translate and indic-trans. The color-coded transliterations indicate that indic-trans often uses existing English words as transliterations. While this is helpful for some specific (uncommon) words, in most cases it leads to ambiguity in sentence meaning (shown in blue). Further, these ambiguous words are far more common in the HI language, and thus have a greater impact on model performance. In summary, developing more accurate tools for translation and transliteration would be very benebased transliterations for the large intermediate task datasets. 
ficial for downstream code-switched tasks.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": "TABREF8" }, { "start": 478, "end": 485, "text": "Table 5", "ref_id": "TABREF10" }, { "start": 799, "end": 814, "text": "Tables 4 and 5)", "ref_id": "TABREF8" }, { "start": 815, "end": 822, "text": "Table 6", "ref_id": "TABREF12" }, { "start": 1020, "end": 1028, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1349, "end": 1356, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Influence of Translation and Transliteration Quality", "sec_num": "4.3" }, { "text": "How does MLM pretraining in conjunction with intermediate-task training impact performance? What is the influence of changing the MLM corpus (and hence its domain) on final task performance?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MLM and Intermediate-Task Training", "sec_num": "4.4" }, { "text": "We address these questions in this section by focusing on NLI and QA for HI-EN using mBERT. Table 8 provides a summary of our experiments on intermediate-task training of mBERT using only English (EN) and both English and Hindi (HI-EN) in conjunction with MLM in the MULTI-TASK setting described in Section 2. From Table 8 , we observe that intermediate training using MLM on code-switched data alone (i.e., the first row for each task) is not as effective as using both MLM and intermediate-task pretraining. NLI benefits from MLM in a multi-task setup in both monolingual and bilingual settings. Further, we note that adding in-domain MOVIE-CS data yields additional improvements for NLI. This shows that sufficient amount of in-domain data is needed for performance gains, and augmenting out-of-domain with in-domain code-switched text can be effective. In the case of QA, MLM does not improve performance in the monolingual setting, although the mean scores are statistically close. In the bilingual setting, we see a clear improvement using GEN-CS for MLM training. However, using both GEN-CS and MOVIE-CS for MLM results in significant degradation of performance. We believe that this is due to the domain of the passages in GLUECOS QA being similar to the HI-EN blog data present in GEN-CS. However, the MOVIE-CS dataset comes from a significantly different domain and thus hurts performance. This indicates that in addition to the amount of unlabelled real code-switched text, when using MLM training, the domain of the text is very influential in determining the performance on downstream tasks (Gururangan et al., 2020). For both NLI and QA, we observe the following common trend: Adding code-switched data from the training set of GLUECOS tasks (referred to as GLUECOS NLI CS and GLUECOS QA CS) degrades performance. This could be due to the quality of training data in the GLUECOS tasks. Each dialogue in the NLI data does not have a lot of content and is highly conversational in nature. In addition to this, the dataset is also very noisy. For example, a word 'humko' is split into its characters 'h u m k o'. Thus, Table 7 : NLI examples from some of our datasets. \u2020 : obtained by translation of the second row using Google Translate API. 
: transliterated using Google Translate API, : transliterated using indic-trans (Bhat et al., 2014) .", "cite_spans": [ { "start": 2334, "end": 2353, "text": "(Bhat et al., 2014)", "ref_id": null } ], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 8", "ref_id": "TABREF15" }, { "start": 315, "end": 322, "text": "Table 8", "ref_id": "TABREF15" }, { "start": 2130, "end": 2137, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "MLM and Intermediate-Task Training", "sec_num": "4.4" }, { "text": "In blue, we show some of the words with ambiguous transliterations by indic-trans. In purple, we show some words that are better transliterated by indic-trans. Best viewed in color. MLM on such data may not be very effective and could hurt performance. For QA, passages in significant portions of the train set are obtained using DrQA -Document Retriever module 15 (Chen et al., 2017) . These passages are monolingual in nature and thus potentially not useful for MLM training with code-switched text.", "cite_spans": [ { "start": 365, "end": 384, "text": "(Chen et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "MLM and Intermediate-Task Training", "sec_num": "4.4" }, { "text": "While pretrained multilingual models are being increasingly used for cross-lingual natural language understanding tasks, their effectiveness for codeswitched tasks has not been thoroughly explored. Winata et al. (2021) show that embeddings from pretrained multilingual models are not very ef-fective for code-switched tasks and more work is needed to effectively adapt them. Intermediate task-training has proven to be effective for many NLP target tasks Vu et al., 2020) , as well as crosslingual zero-shot transfer from English tasks on multilingual models such as XLM-R and mBERT (Tarunesh et al., 2021a) . Ours is the first work to show improved intermediate tasktraining strategies for code-switched target tasks. Pires et al. (2019) and Hsu et al. (2019) showed that mBERT is effective for HI-EN part-of-speech tagging and a reading comprehension task on synthetic code-switched data, respectively. This was extended for a variety of code-switched tasks by Khanuja et al. (2020b) , where they showed improvements on several tasks using MLM pretraining on real and synthetic code-switched text. Chakravarthy et al. (2020) further improved the NLI performance of mBERT by including large amounts of in-domain code-switched text during MLM pretraining. Gururangan et al. (2020) empirically demonstrate that pretraining is most beneficial when the domains of the intermediate and target tasks are similar, which we observe as well. Differing from their recommendation of domain adaptive pretraining using MLM on large quantities of real code-switched data, we find intermediate-task training using significantly smaller amounts of labeled data to be more consistently beneficial across tasks and languages. In contrast to very recent work (Gupta et al., 2021) that reports results using a Robertabased model trained exclusively for sentiment analysis and pretrained on 60M English tweets, we present a bilingual training technique that is consistently effective across tasks and languages while requiring significantly smaller amounts of data. Instead of using mBERT and XLM-R that are very broad in their coverage of languages, it would be interesting to examine whether our observed trends hold when using pretrained models specifically trained for the chosen target languages. 
We could consider using very recent models like IndicBERT (Kakwani et al., 2020) and MuRIL (Khanuja et al., 2021) that are trained exclusively on Indian languages and have been shown to outperform mBERT on cross-lingual tasks (e.g., XTREME) and tasks like IndicGLUE, respectively. We leave this investigation for future work.", "cite_spans": [ { "start": 198, "end": 218, "text": "Winata et al. (2021)", "ref_id": "BIBREF52" }, { "start": 455, "end": 471, "text": "Vu et al., 2020)", "ref_id": "BIBREF50" }, { "start": 583, "end": 607, "text": "(Tarunesh et al., 2021a)", "ref_id": null }, { "start": 719, "end": 738, "text": "Pires et al. (2019)", "ref_id": "BIBREF34" }, { "start": 743, "end": 760, "text": "Hsu et al. (2019)", "ref_id": "BIBREF20" }, { "start": 963, "end": 985, "text": "Khanuja et al. (2020b)", "ref_id": "BIBREF27" }, { "start": 1100, "end": 1126, "text": "Chakravarthy et al. (2020)", "ref_id": "BIBREF11" }, { "start": 1256, "end": 1280, "text": "Gururangan et al. (2020)", "ref_id": "BIBREF19" }, { "start": 1741, "end": 1761, "text": "(Gupta et al., 2021)", "ref_id": "BIBREF18" }, { "start": 2340, "end": 2362, "text": "(Kakwani et al., 2020)", "ref_id": null }, { "start": 2373, "end": 2395, "text": "(Khanuja et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "This is the first work to demonstrate the effectiveness of intermediate-task training for codeswitched NLI, QA and SA on different language pairs, and present code-switched MLM that consistently benefits SA more than standard MLM. We also carry out ablations of transliteration systems and compare their performance across the same corpora translated using different techniques. We observe that high-quality translations and transliterations are important to derive performance improvements on downstream tasks. For future work, we plan to continue exploring pretraining strategies, based on more informed masking objectives and task-adaptive techniques. One key limitation of the newly introduced codeswitched MLM approach is the requirement of LID systems for the languages under consideration. Future work can focus on mitigating this requirement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In all experiments, we have used the AdamW algorithm (Loshchilov and Hutter, 2019 ) and a linear scheduler with warm up for the learning rate. These experiments were run on a single NVIDIA GeForce GTX 1080 Ti GPU. Some crucial fixed hyperparameters are: learning_rate = 5e-5, adam_epsilon = 1e-8, max_gradient_norm = 1, and gradient_accumulation_steps = 10.", "cite_spans": [ { "start": 53, "end": 81, "text": "(Loshchilov and Hutter, 2019", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "B Hyperparameter Tuning", "sec_num": null }, { "text": "The training for all the main intermediate-task experiments was carried out for 4 epochs and the model with the highest performance metric on the task dev set was considered (all the metrics stagnated after a certain point in training). For NLI + QA tasks, two separate models were stored depending on the performance metric on the respective dev set. No hyperparameter search was conducted at this stage. During bilingual training, the batches were interspersed-equal number of examples from English and Romanized HI within each batch. 
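For concreteness, the sketch below shows one way such interleaved bilingual batches can be constructed; the function and variable names are ours, and the actual data loader may differ in details such as how leftover examples are handled.

```python
import random

def bilingual_batches(en_examples, hi_examples, batch_size=8):
    # Build batches with an equal number of EN and Romanized-HI examples:
    # shuffle each side, take half a batch from each, and mix them within the batch.
    random.shuffle(en_examples)
    random.shuffle(hi_examples)
    half = batch_size // 2
    n_batches = min(len(en_examples), len(hi_examples)) // half
    for i in range(n_batches):
        batch = en_examples[i * half:(i + 1) * half] + hi_examples[i * half:(i + 1) * half]
        random.shuffle(batch)
        yield batch
```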
In the single-task systems, we used batch_size = 8 and max_sequence_length = 128 for NLI, batch_size = 8 and max_sequence_length = 256 for SA, batch_size = 4 and max_sequence_length = 512 for QA. During multi-task training, the max_sequence_length was set to the maximum of the aforementioned numbers and the respective batch-sizes. Any multi-task training technique requires at least 14-15 hours for validation accuracy to stagnate. Single task intermediate training requires 4-5 hours for monolingual versions and 8-9 hours for the bilingual version. SA data being smaller in size requires 8-9 hours for multitask, 4-5 hours for bilingual intermediate task and 1-2 hours for monolingual intermediate task. The logging_steps are set to approximately 10% of the total steps in an epoch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Intermediate-Task Training", "sec_num": null }, { "text": "The base fine-tuning files have been taken from the GLUECOS repository 16 . Given that there no dev sets in GLUECOS, and that the tasks are low-resource, we use train accuracy in NLI and train loss in QA as an indication to stop fine-tuning. Manual search is performed over a range of epochs to obtain the best test performance. For NLI, we stopped finetuning when training accuracy is in the range of 70-80% (which meant fine-tuning for 1-4 epochs depending upon the model and technique used). For QA, we stopped when training loss reached \u223c 0.1. Thus, we explored 3-5 epochs for mBERT and 4-8 epochs for XLM-R. We present the statistics over the best results on 5 different seeds. We used batch_size = 8 and max_sequence_length = 256 for GLUECOS NLI 17 and batch_size = 4 and max_sequence_length = 512 for GLUECOS QA. All our fine-tuning runs on GLUECOS take an average of 1 minute per epoch.", "cite_spans": [ { "start": 71, "end": 73, "text": "16", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "B.2 Fine-tuning on GLUECOS NLI & QA Tasks", "sec_num": null }, { "text": "The dev set, being available for all language pairs was used to find the checkpoint with best F1 score, and this model was used for evaluation on the test set. The mean values were presented after carrying out the above procedure for 6 different seeds. The logging_steps are set to approximately 10% of the total steps in an epoch. Each epoch takes around 1 minute for TA, MA and ES, 2 minutes for HI (SemEval).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.3 Fine-tuning on downstream SA tasks", "sec_num": null }, { "text": "In Table 9 , we show some instances from the HI-EN QA dataset. The color-coded transliterations indicate that indic-trans often uses existing English words as transliterations. While for some specific (uncommon) words that is helpful, in most cases it leads to ambiguity in the sentence meaning (shown in blue). Further, these ambiguous words (in blue) are far more common in the HI language, and thus, have a greater impact on model performance. We also note that transliterations of these common words in the GLUECOS dataset matches closely with the transliterations produced using the HI (Google ) unake sthaaneey pratidvandviyon, poloniya vaaraso, ke paas kaaphee kam samarthak hain, phir bhee ve 2000 mein ekalastralaasa chaimpiyanaship jeetane mein kaamayaab rahe. unhonne 1946 mein raashtriy chaimpiyanaship bhee jeetee, aur saath hee do baar kap jeete. poloniya ka ghar konaveektarsaka street par sthit hai, jo old taun se uttar mein das minat kee paidal dooree par hai. 
poloniya ko 2013 mein unakee kharaab vitteey sthiti kee vajah se desh kee sheersh udaan se hata diya gaya tha. ve ab botam profeshanal leeg ke 4th leeg (polaind mein 5 ven star) neshanal polish futabol esosieshan sanrachana mein khel rahe hain.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "C Example Outputs", "sec_num": null }, { "text": "XQuAD HI (indic ) unke sthaneey pratidwandviyon, polonia warsaw, ke paas kaaphi kam samarthak hai, phir bhi ve 2000 main ecrestlasa championships jeetne main kaamyaab rahe. unhone 1946 main rashtri championships bhi jiti, or saath hi do baar cap jite. polonia kaa ghar konwictarska street par sthit he, jo old toun se uttar main das minute kii paidal duuri par he. polonia ko 2013 main unki karaab vittiya sthiti kii vajah se desh kii sheersh udaan se hataa diya gaya tha. ve ab bottm profeshnal lig ke 4th lig (poland main 5 wein str) neshnal polish footbaal association sanrachana main khel rahe hai. Table 9 : QA examples from some of our datasets. \u2020 : obtained by translation of the second row using Google Translate API. : transliterated using Google Translate API, : transliterated using indic-trans (Bhat et al., 2014) .", "cite_spans": [ { "start": 806, "end": 825, "text": "(Bhat et al., 2014)", "ref_id": null } ], "ref_spans": [ { "start": 603, "end": 610, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "C Example Outputs", "sec_num": null }, { "text": "In blue, we show some of the words words with ambiguous transliteration by indic-trans and their counterparts. In purple, we show some words that are better transliterated by indic-trans. Best viewed in color. ES la historia y la amistad proceden de tal manera que est\u00e1s viendo una telenovela en lugar de una cr\u00f3nica de los altibajos que acompa\u00f1an a las amistades de toda la vida. Table 10 : Sentiment analysis examples from our datasets. \u2020 : obtained by translation of the corresponding EN sentence using IndicTrans MT (Ramesh et al., 2021) . \u2021 : obtained by translation of the corresponding EN sentence using MarianMT. : transliterated using Bing Translator API.", "cite_spans": [ { "start": 520, "end": 541, "text": "(Ramesh et al., 2021)", "ref_id": null } ], "ref_spans": [ { "start": 381, "end": 389, "text": "Table 10", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "XQuAD", "sec_num": null }, { "text": "Google Translate API. Further, there is not a lot of difference between the machine and human translations, which might be due to translation bias. Table 10 shows examples from the sentiment analysis datasets in HI-EN, ES-EN, TA-EN and ML-EN.", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 156, "text": "Table 10", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "XQuAD", "sec_num": null }, { "text": "These tasks present an additional challenge with the Indian languages written using transliterated/Romanized text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Implementation used: http://bit.ly/MarianMT 4 http://bit.ly/azureTranslate 5 MultiNLI available at: https://storage.cloud. google.com/xtreme_translations/XNLI/ translate-train/en-hi-translated.tsv 6 SQuAD available at: https://storage.cloud. 
google.com/xtreme_translations/SQuAD/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We hypothesize the drop in performance for HI-EN could be attributed to domain differences between the SA and MLM corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "On using the complete En-Es MLM corpus for +MLM, we obtained an F1 of 62.57 using mBERT and 67.6 using XLM-R on the SA test set of ES-EN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Like Chakravarthy et al. (2020), we also find that XLM-R baseline/+MLM on GLUECOS NLI does not converge and hence we do not report these scores inTable 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This was achieved via the match Boolean attribute(Conneau et al., 2018) 13 https://cloud.google.com/translate 14 We did not switch to Google Translate for all our main experiments due to the overhead of obtaining Google Translate-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/facebookresearch/ DrQA", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/microsoft/GLUECoS 17 The sequence length was doubled as compared to the intermediate-task training to incorporate the long premise length of GLUECOS NLI. This resulted in higher accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the anonymous reviewers for their useful and constructive comments that helped improve the draft.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7" }, { "text": "The mBERT model comprises 179M parameters with the MLM head comprising 712K parameters. The XLM-R model comprises 270M parameters with an MLM head with 842k parameters. For both models, the NLI (sequence classification) and QA heads comprise 1536 parameters each. For SA (sequence classification) the head comprises of 2304 parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Implementation Details", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching", "authors": [ { "first": "Gustavo", "middle": [], "last": "Aguilar", "suffix": "" }, { "first": "Fahad", "middle": [], "last": "Alghamdi", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Soto", "suffix": "" }, { "first": "Thamar", "middle": [], "last": "Solorio", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Thamar Solorio, Mona Diab, and Julia Hirschberg, edi- tors. 2018. Proceedings of the Third Workshop on Com- putational Approaches to Linguistic Code-Switching. 
Association for Computational Linguistics, Melbourne, Australia.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Part of speech tagging for code switched data", "authors": [ { "first": "Fahad", "middle": [], "last": "Alghamdi", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Molina", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Thamar", "middle": [], "last": "Solorio", "suffix": "" }, { "first": "Abdelati", "middle": [], "last": "Hawwari", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Soto", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching", "volume": "", "issue": "", "pages": "98--107", "other_ids": { "DOI": [ "10.18653/v1/W16-5812" ] }, "num": null, "urls": [], "raw_text": "Fahad AlGhamdi, Giovanni Molina, Mona Diab, Thamar Solorio, Abdelati Hawwari, Victor Soto, and Julia Hirschberg. 2016. Part of speech tagging for code switched data. In Proceedings of the Second Work- shop on Computational Approaches to Code Switching, pages 98-107, Austin, Texas. Association for Compu- tational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On the cross-lingual transferability of monolingual representations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.421" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of mono- lingual representations. In Proceedings of the 58th", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "4623--4637", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A dataset for building codemixed goal oriented conversation systems", "authors": [ { "first": "Suman", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Moghe", "suffix": "" }, { "first": "Siddhartha", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Mitesh", "middle": [ "M" ], "last": "Khapra", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3766--3780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suman Banerjee, Nikita Moghe, Siddhartha Arora, and Mitesh M. Khapra. 2018. A dataset for building code- mixed goal oriented conversation systems. In Proceed- ings of the 27th International Conference on Compu- tational Linguistics, pages 3766-3780, Santa Fe, New Mexico, USA. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "TweetEval: Unified benchmark and comparative evaluation for tweet classification", "authors": [ { "first": "Francesco", "middle": [], "last": "Barbieri", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Espinosa Anke", "suffix": "" }, { "first": "Leonardo", "middle": [], "last": "Neves", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "1644--1650", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.148" ] }, "num": null, "urls": [], "raw_text": "Francesco Barbieri, Jose Camacho-Collados, Luis Es- pinosa Anke, and Leonardo Neves. 2020. TweetE- val: Unified benchmark and comparative evaluation for tweet classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1644- 1650, Online. Association for Computational Linguis- tics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Joining hands: Exploiting monolingual treebanks for parsing of code-mixing data", "authors": [ { "first": "Irshad", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "A", "middle": [], "last": "Riyaz", "suffix": "" }, { "first": "Manish", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Dipti", "middle": [], "last": "Shrivastava", "suffix": "" }, { "first": "", "middle": [], "last": "Sharma", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "324--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2017. Joining hands: Exploiting mono- lingual treebanks for parsing of code-mixing data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguis- tics: Volume 2, Short Papers, pages 324-330, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "System Submission for FIRE2014 Shared Task on Transliterated Search", "authors": [ { "first": "", "middle": [], "last": "Iiit-H", "suffix": "" } ], "year": null, "venue": "Proceedings of the Forum for Information Retrieval Evaluation", "volume": "", "issue": "", "pages": "48--53", "other_ids": { "DOI": [ "10.1145/2824864.2824872" ] }, "num": null, "urls": [], "raw_text": "IIIT-H System Submission for FIRE2014 Shared Task on Transliterated Search. In Proceedings of the Forum for Information Retrieval Evaluation, pages 48- 53. 
ACM.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A sentiment analysis dataset for codemixed Malayalam-English", "authors": [ { "first": "Navya", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Shardul", "middle": [], "last": "Jose", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Suryawanshi", "suffix": "" }, { "first": "John", "middle": [ "Philip" ], "last": "Sherly", "suffix": "" }, { "first": "", "middle": [], "last": "Mc-Crae", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", "volume": "", "issue": "", "pages": "177--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, Elizabeth Sherly, and John Philip Mc- Crae. 2020a. A sentiment analysis dataset for code- mixed Malayalam-English. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collabora- tion and Computing for Under-Resourced Languages (CCURL), pages 177-184, Marseille, France. European Language Resources association.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Corpus creation for sentiment analysis in code-mixed Tamil-English text", "authors": [ { "first": "Vigneshwaran", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Ruba", "middle": [], "last": "Muralidaran", "suffix": "" }, { "first": "John", "middle": [ "Philip" ], "last": "Priyadharshini", "suffix": "" }, { "first": "", "middle": [], "last": "Mccrae", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", "volume": "", "issue": "", "pages": "202--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi, Vigneshwaran Murali- daran, Ruba Priyadharshini, and John Philip McCrae. 2020b. Corpus creation for sentiment analysis in code-mixed Tamil-English text. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collabora- tion and Computing for Under-Resourced Languages (CCURL), pages 202-210, Marseille, France. 
European Language Resources association.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Findings of the shared task on offensive language identification in Tamil, Malayalam, and Kannada", "authors": [ { "first": "Ruba", "middle": [], "last": "Bharathi Raja Chakravarthi", "suffix": "" }, { "first": "Navya", "middle": [], "last": "Priyadharshini", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Jose", "suffix": "" }, { "first": "M", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mandl", "suffix": "" }, { "first": "Prasanna", "middle": [], "last": "Kumar Kumaresan", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Ponnusamy", "suffix": "" }, { "first": "R L", "middle": [], "last": "Hariharan", "suffix": "" }, { "first": "John", "middle": [ "P" ], "last": "Mccrae", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Sherly", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "133--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Navya Jose, Anand Kumar M, Thomas Mandl, Prasanna Kumar Kumaresan, Rahul Ponnusamy, Hari- haran R L, John P. McCrae, and Elizabeth Sherly. 2021. Findings of the shared task on offensive language iden- tification in Tamil, Malayalam, and Kannada. In Pro- ceedings of the First Workshop on Speech and Lan- guage Technologies for Dravidian Languages, pages 133-145, Kyiv. Association for Computational Linguis- tics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Detecting entailment in codemixed Hindi-English conversations", "authors": [ { "first": "Sharanya", "middle": [], "last": "Chakravarthy", "suffix": "" }, { "first": "Anjana", "middle": [], "last": "Umapathy", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)", "volume": "", "issue": "", "pages": "165--170", "other_ids": { "DOI": [ "10.18653/v1/2020.wnut-1.22" ] }, "num": null, "urls": [], "raw_text": "Sharanya Chakravarthy, Anjana Umapathy, and Alan W Black. 2020. Detecting entailment in code- mixed Hindi-English conversations. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 165-170, Online. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Codemixed question answering challenge: Crowd-sourcing data and techniques", "authors": [ { "first": "Khyathi", "middle": [], "last": "Chandu", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Loginova", "suffix": "" }, { "first": "Vishal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" }, { "first": "G\u00fcnter", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Manoj", "middle": [], "last": "Chinnakotla", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching", "volume": "", "issue": "", "pages": "29--38", "other_ids": { "DOI": [ "10.18653/v1/W18-3204" ] }, "num": null, "urls": [], "raw_text": "Khyathi Chandu, Ekaterina Loginova, Vishal Gupta, Josef van Genabith, G\u00fcnter Neumann, Manoj Chin- nakotla, Eric Nyberg, and Alan W. Black. 2018a. 
Code- mixed question answering challenge: Crowd-sourcing data and techniques. In Proceedings of the Third Work- shop on Computational Approaches to Linguistic Code- Switching, pages 29-38, Melbourne, Australia. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Language informed modeling of code-switched text", "authors": [ { "first": "Khyathi", "middle": [], "last": "Chandu", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Manzini", "suffix": "" }, { "first": "Sumeet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching", "volume": "", "issue": "", "pages": "92--97", "other_ids": { "DOI": [ "10.18653/v1/W18-3211" ] }, "num": null, "urls": [], "raw_text": "Khyathi Chandu, Thomas Manzini, Sumeet Singh, and Alan W. Black. 2018b. Language informed modeling of code-switched text. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 92-97, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Reading Wikipedia to answer opendomain questions", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1870--1879", "other_ids": { "DOI": [ "10.18653/v1/P17-1171" ] }, "num": null, "urls": [], "raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 1870-1879, Van- couver, Canada. Association for Computational Lin- guistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised crosslingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross- lingual representation learning at scale. 
In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, pages 8440-8451, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "XNLI: Evaluating crosslingual sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2475--2485", "other_ids": { "DOI": [ "10.18653/v1/D18-1269" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross- lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 4171- 4186, Minneapolis, Minnesota. Association for Com- putational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Task-specific pre-training and cross lingual transfer for sentiment analysis in Dravidian codeswitched languages", "authors": [ { "first": "Akshat", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Krishna", "middle": [], "last": "Sai", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Rallabandi", "suffix": "" }, { "first": "", "middle": [], "last": "Black", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", "volume": "", "issue": "", "pages": "73--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akshat Gupta, Sai Krishna Rallabandi, and Alan W Black. 2021. Task-specific pre-training and cross lin- gual transfer for sentiment analysis in Dravidian code- switched languages. In Proceedings of the First Work- shop on Speech and Language Technologies for Dra- vidian Languages, pages 73-79, Kyiv. 
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8342--8360", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.740" ] }, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Zero-shot reading comprehension by crosslingual transfer learning with multi-lingual language representation model", "authors": [ { "first": "Tsung-Yuan", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Chi-Liang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5933--5940", "other_ids": { "DOI": [ "10.18653/v1/D19-1607" ] }, "num": null, "urls": [], "raw_text": "Tsung-Yuan Hsu, Chi-Liang Liu, and Hung-yi Lee. 2019. Zero-shot reading comprehension by cross- lingual transfer learning with multi-lingual language representation model. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5933-5940, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation", "authors": [ { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 37th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "4411--4421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task bench- mark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, pages 4411-4421, Virtual. 
PMLR.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Marian: Fast neural machine translation in C++", "authors": [ { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Tomasz", "middle": [], "last": "Dwojak", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Neckermann", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Seide", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Germann", "suffix": "" }, { "first": "Alham", "middle": [], "last": "Fikri Aji", "suffix": "" } ], "year": 2018, "venue": "Nikolay Bogoychev, Andr\u00e9 F. T. Martins, and Alexandra Birch", "volume": "", "issue": "", "pages": "116--121", "other_ids": { "DOI": [ "10.18653/v1/P18-4020" ] }, "num": null, "urls": [], "raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Al- ham Fikri Aji, Nikolay Bogoychev, Andr\u00e9 F. T. Mar- tins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116-121, Mel- bourne, Australia. Association for Computational Lin- guistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "IndicNLPSuite: Monolingual corpora, evaluation benchmarks and pretrained multilingual language models for Indian languages", "authors": [ { "first": "Pratyush", "middle": [], "last": "Khapra", "suffix": "" }, { "first": "", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "4948--4961", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.445" ] }, "num": null, "urls": [], "raw_text": "Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual corpora, evaluation benchmarks and pre- trained multilingual language models for Indian lan- guages. In Findings of the Association for Computa- tional Linguistics: EMNLP 2020, pages 4948-4961, Online. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Subhash Chandra Bose Gali, Vish Subramanian, and Partha P. Talukdar. 2021. Muril: Multilingual representations for indian languages. 
CoRR", "authors": [ { "first": "Simran", "middle": [], "last": "Khanuja", "suffix": "" }, { "first": "Diksha", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Sarvesh", "middle": [], "last": "Mehtani", "suffix": "" }, { "first": "Savya", "middle": [], "last": "Khosla", "suffix": "" }, { "first": "Atreyee", "middle": [], "last": "Dey", "suffix": "" }, { "first": "Balaji", "middle": [], "last": "Gopalan", "suffix": "" }, { "first": "Dilip", "middle": [], "last": "Kumar Margam", "suffix": "" }, { "first": "Pooja", "middle": [], "last": "Aggarwal", "suffix": "" }, { "first": "Rajiv", "middle": [], "last": "Teja Nagipogu", "suffix": "" }, { "first": "Shachi", "middle": [], "last": "Dave", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Gupta", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Ku- mar Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, Shruti Gupta, Subhash Chandra Bose Gali, Vish Subramanian, and Partha P. Talukdar. 2021. Muril: Multilingual representations for indian lan- guages. CoRR, abs/2103.10730.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A new dataset for natural language inference from code-mixed conversations", "authors": [ { "first": "Simran", "middle": [], "last": "Khanuja", "suffix": "" }, { "first": "Sandipan", "middle": [], "last": "Dandapat", "suffix": "" }, { "first": "Sunayana", "middle": [], "last": "Sitaram", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the The 4th Workshop on Computational Approaches to Code Switching", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simran Khanuja, Sandipan Dandapat, Sunayana Sitaram, and Monojit Choudhury. 2020a. A new dataset for natural language inference from code-mixed conversations. In Proceedings of the The 4th Work- shop on Computational Approaches to Code Switching, pages 9-16, Marseille, France. European Language Re- sources Association.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "GLUECoS: An evaluation benchmark for codeswitched NLP", "authors": [ { "first": "Simran", "middle": [], "last": "Khanuja", "suffix": "" }, { "first": "Sandipan", "middle": [], "last": "Dandapat", "suffix": "" }, { "first": "Anirudh", "middle": [], "last": "Srinivasan", "suffix": "" }, { "first": "Sunayana", "middle": [], "last": "Sitaram", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3575--3585", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.329" ] }, "num": null, "urls": [], "raw_text": "Simran Khanuja, Sandipan Dandapat, Anirudh Srini- vasan, Sunayana Sitaram, and Monojit Choudhury. 2020b. GLUECoS: An evaluation benchmark for code- switched NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguis- tics, pages 3575-3585, Online. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Decoupled weight decay regularization", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Con- ference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Overview of the hasoc track at fire 2020: Hate speech and offensive language identification in tamil, malayalam, hindi, english and german", "authors": [ { "first": "Thomas", "middle": [], "last": "Mandl", "suffix": "" }, { "first": "Sandip", "middle": [], "last": "Modha", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "M", "middle": [], "last": "", "suffix": "" }, { "first": "Bharathi Raja", "middle": [], "last": "Chakravarthi", "suffix": "" } ], "year": 2020, "venue": "In Forum for Information Retrieval Evaluation", "volume": "", "issue": "", "pages": "29--32", "other_ids": { "DOI": [ "10.1145/3441501.3441517" ] }, "num": null, "urls": [], "raw_text": "Thomas Mandl, Sandip Modha, Anand Kumar M, and Bharathi Raja Chakravarthi. 2020. Overview of the hasoc track at fire 2020: Hate speech and offensive lan- guage identification in tamil, malayalam, hindi, english and german. In Forum for Information Retrieval Eval- uation, page 29-32.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "All that is English may be Hindi: Enhancing language identification through automatic ranking of the likeliness of word borrowing in social media", "authors": [ { "first": "Jasabanta", "middle": [], "last": "Patro", "suffix": "" }, { "first": "Bidisha", "middle": [], "last": "Samanta", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Abhipsa", "middle": [], "last": "Basu", "suffix": "" }, { "first": "Prithwish", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "Animesh", "middle": [], "last": "Mukherjee", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2264--2274", "other_ids": { "DOI": [ "10.18653/v1/D17-1240" ] }, "num": null, "urls": [], "raw_text": "Jasabanta Patro, Bidisha Samanta, Saurabh Singh, Ab- hipsa Basu, Prithwish Mukherjee, Monojit Choudhury, and Animesh Mukherjee. 2017. All that is English may be Hindi: Enhancing language identification through automatic ranking of the likeliness of word borrowing in social media. In Proceedings of the 2017 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 2264-2274, Copenhagen, Denmark. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "SemEval-2020 task 9: Overview of sentiment analysis of code-mixed tweets", "authors": [ { "first": "Parth", "middle": [], "last": "Patwa", "suffix": "" }, { "first": "Gustavo", "middle": [], "last": "Aguilar", "suffix": "" }, { "first": "Sudipta", "middle": [], "last": "Kar", "suffix": "" }, { "first": "Suraj", "middle": [], "last": "Pandey", "suffix": "" }, { "first": "Pykl", "middle": [], "last": "Srinivas", "suffix": "" }, { "first": "Bj\u00f6rn", "middle": [], "last": "Gamb\u00e4ck", "suffix": "" }, { "first": "Tanmoy", "middle": [], "last": "Chakraborty", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "774--790", "other_ids": {}, "num": null, "urls": [], "raw_text": "Parth Patwa, Gustavo Aguilar, Sudipta Kar, Suraj Pandey, Srinivas PYKL, Bj\u00f6rn Gamb\u00e4ck, Tanmoy Chakraborty, Thamar Solorio, and Amitava Das. 2020. SemEval-2020 task 9: Overview of sentiment analy- sis of code-mixed tweets. In Proceedings of the Four- teenth Workshop on Semantic Evaluation, pages 774- 790, Barcelona (online). International Committee for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "English intermediatetask training improves zero-shot cross-lingual transfer too", "authors": [ { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Iacer", "middle": [], "last": "Calixto", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Phu Mon Htut", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Vania", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Kann", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "557--575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Phang, Iacer Calixto, Phu Mon Htut, Yada Pruk- sachatkun, Haokun Liu, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. English intermediate- task training improves zero-shot cross-lingual transfer too. In Proceedings of the 1st Conference of the Asia- Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Confer- ence on Natural Language Processing, pages 557-575, Suzhou, China. Association for Computational Linguis- tics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks", "authors": [ { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "F\u00e9vry", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.01088" ] }, "num": null, "urls": [], "raw_text": "Jason Phang, Thibault F\u00e9vry, and Samuel R Bow- man. 2018. Sentence encoders on stilts: Supplemen- tary training on intermediate labeled-data tasks. 
arXiv preprint arXiv:1811.01088.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "How multilingual is multilingual BERT?", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4996--5001", "other_ids": { "DOI": [ "10.18653/v1/P19-1493" ] }, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Flo- rence, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Word embeddings for code-mixed language processing", "authors": [ { "first": "Adithya", "middle": [], "last": "Pratapa", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "Sunayana", "middle": [], "last": "Sitaram", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3067--3072", "other_ids": { "DOI": [ "10.18653/v1/D18-1344" ] }, "num": null, "urls": [], "raw_text": "Adithya Pratapa, Monojit Choudhury, and Sunayana Sitaram. 2018. Word embeddings for code-mixed lan- guage processing. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 3067-3072, Brussels, Belgium. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Intermediate-task transfer learning with pretrained language models: When and why does it work?", "authors": [ { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaoyi", "middle": [], "last": "Phu Mon Htut", "suffix": "" }, { "first": "Richard", "middle": [ "Yuanzhe" ], "last": "Zhang", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Vania", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Kann", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5231--5247", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.467" ] }, "num": null, "urls": [], "raw_text": "Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-task transfer learning with pre- trained language models: When and why does it work? In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, pages 5231- 5247, Online. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": { "DOI": [ "10.18653/v1/D16-1264" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Anoop Kunchukuttan, Pratyush Kumar, and Mitesh Shantadevi Khapra. 2021. 
Samanantar: The largest publicly available parallel corpora collection for 11 indic languages", "authors": [ { "first": "Gowtham", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "Sumanth", "middle": [], "last": "Doddapaneni", "suffix": "" }, { "first": "Aravinth", "middle": [], "last": "Bheemaraj", "suffix": "" }, { "first": "Mayank", "middle": [], "last": "Jobanputra", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Raghavan", "suffix": "" }, { "first": "Ajitesh", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Sujit", "middle": [], "last": "Sahoo", "suffix": "" }, { "first": "Harshita", "middle": [], "last": "Diddee", "suffix": "" }, { "first": "J", "middle": [], "last": "Mahalakshmi", "suffix": "" }, { "first": "Divyanshu", "middle": [], "last": "Kakwani", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank Jobanputra, Raghavan AK, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, Ma- halakshmi J, Divyanshu Kakwani, Navneet Ku- mar, Aswin Pradeep, Kumar Deepak, Vivek Ragha- van, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh Shantadevi Khapra. 2021. Samanantar: The largest publicly available parallel corpora collection for 11 indic languages.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "A Twitter corpus for Hindi-English code mixed POS tagging", "authors": [ { "first": "Kushagra", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Indira", "middle": [], "last": "Sen", "suffix": "" }, { "first": "Ponnurangam", "middle": [], "last": "Kumaraguru", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media", "volume": "", "issue": "", "pages": "12--17", "other_ids": { "DOI": [ "10.18653/v1/W18-3503" ] }, "num": null, "urls": [], "raw_text": "Kushagra Singh, Indira Sen, and Ponnurangam Ku- maraguru. 2018. A Twitter corpus for Hindi-English code mixed POS tagging. In Proceedings of the Sixth International Workshop on Natural Language Process- ing for Social Media, pages 12-17, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631- 1642, Seattle, Washington, USA. 
Association for Com- putational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Overview for the first shared task on language identification in codeswitched data", "authors": [ { "first": "Thamar", "middle": [], "last": "Solorio", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Blair", "suffix": "" }, { "first": "Suraj", "middle": [], "last": "Maharjan", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Mahmoud", "middle": [], "last": "Ghoneim", "suffix": "" }, { "first": "Abdelati", "middle": [], "last": "Hawwari", "suffix": "" }, { "first": "Fahad", "middle": [], "last": "Alghamdi", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" }, { "first": "Alison", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the First Workshop on Computational Approaches to Code Switching", "volume": "", "issue": "", "pages": "62--72", "other_ids": { "DOI": [ "10.3115/v1/W14-3907" ] }, "num": null, "urls": [], "raw_text": "Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Ab- delati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, and Pascale Fung. 2014. Overview for the first shared task on language identification in code- switched data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62-72, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "MSR India at SemEval-2020 task 9: Multilingual models can do code-mixing too", "authors": [ { "first": "Anirudh", "middle": [], "last": "Srinivasan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "951--956", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anirudh Srinivasan. 2020. MSR India at SemEval- 2020 task 9: Multilingual models can do code-mixing too. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 951-956, Barcelona (on- line). International Committee for Computational Lin- guistics.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "A corpus of english-hindi code-mixed tweets for sarcasm detection", "authors": [ { "first": "Sahil", "middle": [], "last": "Swami", "suffix": "" }, { "first": "Ankush", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Vinay", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.11869" ] }, "num": null, "urls": [], "raw_text": "Sahil Swami, Ankush Khandelwal, Vinay Singh, Syed Sarfaraz Akhtar, and Manish Shrivastava. 2018. A corpus of english-hindi code-mixed tweets for sar- casm detection. arXiv preprint arXiv:1805.11869.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Meta-learning for effective multi-task and multilingual modelling", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2101.10368" ] }, "num": null, "urls": [], "raw_text": "Meta-learning for effective multi-task and multilingual modelling. 
arXiv preprint arXiv:2101.10368.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "From machine translation to code-switching: Generating high-quality code-switched text", "authors": [ { "first": "Ishan", "middle": [], "last": "Tarunesh", "suffix": "" }, { "first": "Syamantak", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Preethi", "middle": [], "last": "Jyothi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ishan Tarunesh, Syamantak Kumar, and Preethi Jyothi. 2021b. From machine translation to code-switching: Generating high-quality code-switched text. In Pro- ceedings of the 58th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "EN-ES-CS: An English-Spanish code-switching Twitter corpus for multilingual sentiment analysis", "authors": [ { "first": "David", "middle": [], "last": "Vilares", "suffix": "" }, { "first": "Miguel", "middle": [ "A" ], "last": "Alonso", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "G\u00f3mez-Rodr\u00edguez", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "4149--4153", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Vilares, Miguel A. Alonso, and Carlos G\u00f3mez- Rodr\u00edguez. 2016. EN-ES-CS: An English-Spanish code-switching Twitter corpus for multilingual senti- ment analysis. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Evalua- tion (LREC'16), pages 4149-4153, Portoro\u017e, Slovenia. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Exploring and predicting transferability across NLP tasks", "authors": [ { "first": "Tu", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tsendsuren", "middle": [], "last": "Munkhdalai", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Trischler", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mattarella-Micke", "suffix": "" }, { "first": "Subhransu", "middle": [], "last": "Maji", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "7882--7926", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.635" ] }, "num": null, "urls": [], "raw_text": "Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessan- dro Sordoni, Adam Trischler, Andrew Mattarella- Micke, Subhransu Maji, and Mohit Iyyer. 2020. Ex- ploring and predicting transferability across NLP tasks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7882-7926, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": { "DOI": [ "10.18653/v1/N18-1101" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Are multilingual models effective in codeswitching?", "authors": [ { "first": "Samuel", "middle": [], "last": "Genta Indra Winata", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Cahyawijaya", "suffix": "" }, { "first": "Zhaojiang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching", "volume": "", "issue": "", "pages": "142--153", "other_ids": { "DOI": [ "10.18653/v1/2021.calcs-1.20" ] }, "num": null, "urls": [], "raw_text": "Genta Indra Winata, Samuel Cahyawijaya, Zihan Liu, Zhaojiang Lin, Andrea Madotto, and Pascale Fung. 2021. Are multilingual models effective in code- switching? In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code- Switching, pages 142-153, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Huggingface's transformers: State-of-theart natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the- art natural language processing. ArXiv, pages arXiv- 1910.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "833--844", "other_ids": { "DOI": [ "10.18653/v1/D19-1077" ] }, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguis- tics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Different transliterations for some descriptive words in MA. indic-trans leaves some residual characters in the native script.", "uris": null, "type_str": "figure" }, "TABREF0": { "num": null, "text": "HI files EN ko HI desk EN pe HI rakh HI do HI Yeh HI files EN ko HI desk EN pe HI rakh HI do", "type_str": "table", "content": "", "html": null }, "TABREF2": { "num": null, "text": "5,6 . The Romanized version of", "type_str": "table", "content": "
Method | ES-EN (X: ES) F1 | Prec. | Rec. | HI-EN (X: HI) F1 | Prec. | Rec. | TA-EN (X: TA) F1 | Prec. | Rec. | ML-EN (X: ML) F1 | Prec. | Rec.
mBERT
Baseline | 60.95 | 61.93 | 60.43 | 68.17 | 68.75 | 68 | 76.07 | 75.33 | 77.66 | 75.46 | 75.72 | 75.64
+EN SING-TASK | 65.11 | 66.00 | 65.00 | 69.14 | 69.72 | 68.96 | 76.41 | 75.69 | 78.11 | 76.49 | 76.78 | 76.44
+X SINGLE-TASK | 64.69 | 65.71 | 64.57 | 68.75 | 69.37 | 68.60 | 75.78 | 74.89 | 77.80 | 75.92 | 75.96 | 76.15
+X-EN SINGLE-TASK | 66.64 | 67.61 | 66.21 | 69.20 | 69.63 | 69.06 | 76.75 | 76.11 | 78.63 | 77.00 | 77.16 | 77.04
+MLM | 62.02 | 62.93 | 61.29 | 69.89 | 70.58 | 69.76 | 76.73 | 76.14 | 78.53 | 76.13 | 76.23 | 76.24
+CODE-SWITCHED MLM | 63.88 | 64.81 | 63.13 | 70.33 | 71.17 | 70.10 | - | - | - | - | - | -
+X-EN/MLM MULTI-TASK | 67.01 | 68.11 | 66.72 | 69.99 | 70.29 | 69.91 | 77.23 | 76.6 | 79.16 | 77.49 | 77.56 | 77.58
XLM-R
Baseline | 66.45 | 67.45 | 65.86 | 69.37 | 69.38 | 69.46 | 75.53 | 74.56 | 77.75 | 74.14 | 74.35 | 74.15
+EN SING-TASK | 67.82 | 68.89 | 67.41 | 70.23 | 70.78 | 70.09 | 76.08 | 75.41 | 77.65 | 75.14 | 75.29 | 75.42
+X SINGLE-TASK | 66.68 | 68.40 | 66.29 | 69.96 | 70.38 | 69.83 | 76.36 | 75.52 | 77.88 | 76.12 | 76.10 | 76.24
+X-EN SINGLE-TASK | 68.97 | 69.79 | 68.28 | 70.23 | 70.91 | 70.01 | 76.49 | 75.90 | 77.60 | 76.68 | 76.80 | 76.62
+MLM | 66.37 | 67.42 | 65.69 | 70.92 | 71.94 | 70.66 | 76.95 | 76.21 | 78.60 | 76.28 | 76.26 | 76.42
+CODE-SWITCHED MLM | 67.10 | 68.30 | 66.55 | 71.74 | 72.29 | 71.59 | - | - | - | - | - | -
+X-EN/MLM MULTI-TASK | 70.33 | 71.41 | 69.57 | 71.08 | 71.43 | 70.97 | 77.50 | 76.84 | 78.60 | 76.91 | 76.94 | 76.98
Our Best Models (Max) | 71.7 | 72.8 | 71.3 | 72.6 | 73.2 | 72.4 | 78 | 77 | 79 | 78 | 78 | 78
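As the caption for this table notes, every score above is a weighted average that is further averaged over five runs with random seeds. Purely as a point of reference (not the authors' code), the sketch below shows how such scores could be computed with scikit-learn; the names `gold` and `preds_per_seed` are hypothetical.

```python
# Minimal sketch: weighted P/R/F1 per seeded run, then averaged across runs.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def averaged_scores(gold, preds_per_seed):
    """gold: list of labels; preds_per_seed: one prediction list per random seed."""
    per_seed = []
    for preds in preds_per_seed:
        p, r, f1, _ = precision_recall_fscore_support(
            gold, preds, average="weighted", zero_division=0)
        per_seed.append((f1, p, r))
    return np.mean(per_seed, axis=0)  # (mean F1, mean precision, mean recall)

# Dummy usage with two hypothetical seeds:
# averaged_scores(["pos", "neg", "neu"], [["pos", "neg", "neg"], ["pos", "pos", "neu"]])
```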
", "html": null }, "TABREF3": { "num": null, "text": "Our main results for sentiment analysis. Best results for each model are underlined and the overall best results are in bold. All scores are weighted averages and are further averaged over five runs with random seeds. The last row gives the F1, precision and recall value for the method with the maximum F1 across all five seeds.", "type_str": "table", "content": "", "html": null }, "TABREF4": { "num": null, "text": ". While prior work required roughly 17M sentences in ES-EN, 2.09M sentences in HI-EN and 60M tweets to train TweetEval for TA and MA, we use 192K, 180K, 330K and 240K sentences for the four respective language pairs. While MLM training (i.e., +MLM inTable 1) consistently improves over the baseline, we observe that code-switched MLM (i.e.,CODE-SWITCHED MLM", "type_str": "table", "content": "
Method | GLUECOS NLI (acc.) Max | Mean | GLUECOS QA (F1) Max | Mean
mBERT
Baseline | 61.07 | 57.51 | 66.89 | 64.25
+MLM | 59.94 | 58.75 | 60.8 | 58.28
+EN SING-TASK | 62.40 | 60.73 | 77.62 | 75.77
+HI SING-TASK | 63.73 | 62.09 | 79.63 | 76.77
+HI-EN SING-TASK | 65.55 | 64.1 | 81.61 | 79.97
+HI-EN/NLI-QA MULTI-TASK | 66.74 | 65.3 | 83.03 | 80.25
+HI-EN/MLM MULTI-TASK | 66.66 | 65.61 | 81.05 | 79.11
XLM-R
Baseline | - | - | 56.86 | 53.22
+MLM | - | - | 45.9 | 42.34
+EN SING-TASK | 66.22 | 63.91 | 82.04 | 80.92
+HI SING-TASK | 63.24 | 61.73 | 81.48 | 80.55
+HI-EN SING-TASK | 65.01 | 64.37 | 82.41 | 81.36
+HI-EN/NLI-QA MULTI-TASK | 64.49 | 64.35 | 83.95 | 82.38
+HI-EN/MLM MULTI-TASK | 66.66 | 65.01 | 82.1 | 80.44
Previous work on GLUECOS
mBERT (Khanuja et al., 2020b) † | 59.28 | 57.74 | 63.58 | 62.23
mod-mBERT
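The +MLM rows in the tables above correspond to continued masked language modeling on real code-switched text before target-task fine-tuning. The following is a minimal sketch of that standard MLM step with HuggingFace Transformers, not the authors' implementation; the corpus path `cs_corpus.txt` and all hyperparameters are placeholders, and the code-switched MLM variant proposed in the paper is not reproduced here.

```python
# Sketch only: continued MLM pretraining of mBERT on a code-switched corpus.
import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Placeholder corpus: one (possibly code-switched) sentence per line.
with open("cs_corpus.txt", encoding="utf-8") as f:
    lines = [l.strip() for l in f if l.strip()]
encodings = tokenizer(lines, truncation=True, max_length=128)

class LineDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(encodings["input_ids"])
    def __getitem__(self, i):
        return {k: torch.tensor(v[i]) for k, v in encodings.items()}

# 15% random masking, i.e. the standard MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                           mlm_probability=0.15)
args = TrainingArguments(output_dir="mlm_checkpoint", num_train_epochs=1,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=LineDataset(),
        data_collator=collator).train()
```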
", "html": null }, "TABREF5": { "num": null, "text": "Our main results for NLI and QA from intermediate-task training. All scores are averaged over five runs with random seeds. Max and mean accuracies (for NLI) and F1-scores (for QA) over these runs are listed.", "type_str": "table", "content": "", "html": null }, "TABREF6": { "num": null, "text": "4.2 Results on NLI and QA NLI/QA SINGLE TASK Results. Table 2 shows our main results for the NLI and QA tasks in HI-EN. (Code-switched benchmarks in other language pairs are not available for NLI and QA.", "type_str": "table", "content": "
Intermediate-Task Paradigm | Max | Mean | Std.
GLUECOS NLI (acc.)
EN SING-TASK | 62.40 | 60.73 | 1.78
HI SING-TASK | 63.73 | 62.09 | 0.99
HI-EN SING-TASK | 65.55 | 64.1 | 0.89
Sequential Training: EN → HI | 62.02 | 59.94 | 1.83
GLUECOS QA (F1)
EN SING-TASK | 77.62 | 75.77 | 1.79
HI SING-TASK | 79.63 | 76.77 | 1.86
HI-EN SING-TASK | 81.61 | 79.97 | 1.29
Sequential Training: EN → HI | 76.23 | 73.69 | 1.78
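The HI-EN SING-TASK rows above use a single intermediate task whose training set merges the English data with its Hindi translations, so that every training batch mixes both languages, in contrast to the sequential EN → HI rows. Below is a minimal sketch of that merging step; `en_examples` and `hi_examples` are hypothetical lists of already-prepared task examples, not objects from the paper's code.

```python
# Sketch only: merge EN intermediate-task data with its HI translations so that
# shuffled batches contain instances from both languages.
import random
from torch.utils.data import DataLoader, Dataset

class MergedBilingualDataset(Dataset):
    def __init__(self, en_examples, hi_examples, seed=42):
        self.examples = list(en_examples) + list(hi_examples)
        random.Random(seed).shuffle(self.examples)

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]

# Usage: DataLoader(MergedBilingualDataset(en_examples, hi_examples),
#                   batch_size=32, shuffle=True)
# Sequential training would instead fine-tune on en_examples first and on
# hi_examples afterwards (the "EN -> HI" rows above).
```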
", "html": null }, "TABREF7": { "num": null, "text": "Sequential bilingual training with mBERT yields poor performance on both NLI and QA. Scores correspond to five random runs with random seeds. TASK performs the best (based on mean scores) on both NLI 11 and QA. Another interesting observation is that XLM-R benefits more from EN SING-TASK while mBERT benefits more from HI SING-TASK, compared to the baseline. This could be attributed to XLM-R having encountered Romanized HI text during its pretraining unlike mBERT and the GLUECOS corpus contains only Romanized Hindi. Using a merged HI-EN dataset for HI-EN SING-TASK training, with training batches consisting of both HI and EN instances, was critical for improved performance.Table 3shows the difference in per-", "type_str": "table", "content": "
Translate − Transliterate | Max | Mean | Std.
GLUECOS NLI (acc.)
Manual − Google Translate API | 62.24 | 61.6 | 0.62
Manual − indic-trans | 62.09 | 59.71 | 1.37
Google Translate API (both) | 60.18 | 58.59 | 1.07
GLUECOS QA (F1)
Manual − Google Translate API | 79.32 | 77.33 | 2.22
Manual − indic-trans | 78.09 | 76.35 | 1.36
Google Translate API (both) | 78.44 | 76.72 | 1.22
", "html": null }, "TABREF8": { "num": null, "text": "Effect of translation and transliteration quality on intermediate-task training, using HI-EN SING-TASK for NLI and QA. Scores correspond to five random runs with random seeds.", "type_str": "table", "content": "", "html": null }, "TABREF10": { "num": null, "text": "", "type_str": "table", "content": "
Effect of translation and transliteration quality on intermediate-task training, using HI SING-TASK for NLI and QA. Scores correspond to five random runs with random seeds.
", "html": null }, "TABREF11": { "num": null, "text": ", thus indicating transliterations from the Google translate API would be a better choice as compared to indic-trans.14", "type_str": "table", "content": "
Task | Model | Translit Tool | F1 | Prec. | Rec.
TA SINGLE-TASK | mBERT | indic-trans | 75.42 | 74.72 | 76.62
TA SINGLE-TASK | mBERT | Bing API | 75.78 | 74.89 | 77.8
TA SINGLE-TASK | XLM-R | indic-trans | 75.51 | 74.87 | 76.66
TA SINGLE-TASK | XLM-R | Bing API | 76.36 | 75.52 | 77.88
ML SINGLE-TASK | mBERT / XLM-R | indic-trans / Bing API | 74.7 75.92 75.96 76.15 74.82 74.71 74.68 74.82 74.66 76.12 76.1 76.24
", "html": null }, "TABREF12": { "num": null, "text": "", "type_str": "table", "content": "
Effect of transliteration quality of intermediate-tasks on SA results. Scores are weighted averages further averaged over 5 random runs.
", "html": null }, "TABREF13": { "num": null, "text": "Split Ends a Cosmetology Shop is a nice example of appositional elegance combined with euphemism in the appositive and the low key or off-beat opening. HYPOTHESIS: Split Ends is an ice cream shop. PREMISE: split ends ek kosmetolojee shop epositiv aur kam kunjee ya oph-beet opaning mein vyanjana ke saath sanyukt eplaid laality ka ek achchha udaaharan hai.HYPOTHESIS: split ends ek aaisakreem shop hai. PREMISE: split inds ek kosmetolojee shop samaanaadhikaran shishtata aur kam kunjee ya of-beet opaning mein preyokti ke mishran ka ek achchha udaaharan hai. HYPOTHESIS: split ends ek aaisakreem kee dukaan hai. PREMISE: split inds ek cosmetology shop samaanaadhikaran shishtataa or kam kunjee yaa of-beet opening main preyokti ke mishran kaa ek acha udhaaharan he. HYPOTHESIS: split ands ek icecream kii dukaan he.", "type_str": "table", "content": "
Language | Premise / Hypothesis | Label | Dataset
EN | PREMISE: | entailment | MultiNLI / XNLI
HI (Google ) | | entailment | Translation †
HI (Google ) | | entailment | XNLI
HI (indic ) | | entailment | XNLI
", "html": null }, "TABREF15": { "num": null, "text": "Performance on different variations of MLM + intermediate-task training of mBERT. We underline the relatively best model and bold-face the model with the highest performance for each task.", "type_str": "table", "content": "", "html": null }, "TABREF16": { "num": null, "text": "Android Tv ko Launch kiya hain. Jise tahat yeh Tv Android Operating System par chalta hain. Iski Keemat Rs. 51,990 rakhi gayi hain. Ab aaya Android TV Mitashi Company ne Android KitKat OS par chalne wale Smart TV ko Launch kar diya hain. Company ne is T.V. ko 51,990 Rupees ke price par launch kiya hain. Agar features ki baat kare to is Android TV ki Screen 50 inch ki hain, Jo 1280 X 1080 p ka resolution deti hain. USB plug play function ke saath yeh T.V. 27 Vidoe formats ko support karta hain. Vidoe input ke liye HDMI Cable, PC, Wi-Fi aur Ethernet Connectivity di gyi hain. Behtar processing ke liye dual core processor ke saath 512 MB ki RAM lagayi gyi hain. Yeh Android TV banane wali company Mitashi isse pahle khilaune banane ka kaam karti thi. Iske alawa is company ne education se jude products banane shuru kiye. 1998 mein stapith huyi is company ne Android T.V. ke saath-saath India ki pahli Android Gaming Device ko bhi launch kiya hain. Polonia Warsaw, have significantly fewer supporters, yet they managed to win Ekstraklasa Championship in 2000. They also won the country's championship in 1946, and won the cup twice as well. Polonia's home venue is located at Konwiktorska Street, a ten-minute walk north from the Old Town. Polonia was relegated from the country's top flight in 2013 because of their disastrous financial situation. They are now playing in the 4th league (5th tier in Poland) -the bottom professional league in the National -Polish Football Association structure. poloniya voraso ke paas kaaphee kam samarthak hain, phir bhee ve 2000 mein ekastraklaasa chaimpiyanaship jeetane mein kaamayaab rahe. unhonne 1946 mein desh kee chaimpiyanaship bhee jeetee, aur do baar kap bhee jeeta. poloniya ka ghareloo sthal konaveektarsaka street par sthit hai, jo old taun se uttar mein das minat kee paidal dooree par hai. apanee vinaashakaaree vitteey sthiti ke kaaran poloniya ko 2013 mein desh kee sheersh udaan se hata diya gaya tha. ab ve neshanal (polish polish esosieshan) sanrachana mein 4 ven leeg (polaind mein 5 ven star) mein khel rahe hain.", "type_str": "table", "content": "
Language | QA Context | Dataset
HI-EN | Mitashi ne ek | GLUECOS QA
EN | Their local rivals, | SQuAD/XQuAD
HI (Google ) | unake sthaaneey pratidvandviyon, | Translation †
", "html": null }, "TABREF17": { "num": null, "text": "ENthe story and the friendship proceeds in such a way that you 're watching a soap opera rather than a chronicle of the ups and downs that accompany lifelong friendships .negative SST HI kahani or dosti is tarah se aage badhati hai ki op jeevan bhar ki dosti ke saath aane vale utaar-chadhav k kram k bajay ek dharavahik dekh rahe hain negative Translation \u2020", "type_str": "table", "content": "
Language | Sentence | Label | Dataset
EN | It's definitely Christmas season! My social media news feeds have been all about Hatchimals since midnight! Good luck parents! | positive | TweetEval
HI | yeah nishchit roop se christmas ka mausam hai! mera social media news feed adhi raat se hatchimal ke baare mein hai! mata-pita ko shubhkamnayen! | positive | Translation †
ES | ¡Es definitivamente la temporada de Navidad! Mis noticias en las redes sociales han sido todo acerca de Hatchimals desde medianoche! ¡Buena suerte padres! | positive | Translation ‡
ML | ith theerchayayum chrismas seesonnan, ente social media news feads ardharathri muthal hachimalsine kurichan! | positive | Translation †
TA | itu nichchayam christumus column! nalliravu muthal enathu samook utaka seithi oottngal anaithum hatchimals patriadhu! petrors nalvazthukal! | positive | Translation †
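The rows above show an English TweetEval sentence alongside the translations used to build bilingual intermediate-task data; the paper's translations come from the sources marked † and ‡ in the table. Purely as an illustration of the translation step, here is a sketch using a public MarianMT checkpoint from the Hugging Face hub, which is an assumption on our part and not the paper's pipeline.

```python
# Illustration only: translating an English intermediate-task sentence to Hindi
# with a public MarianMT checkpoint (assumed available; not the paper's setup).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-hi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = ["It's definitely Christmas season! Good luck parents!"]
batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```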
", "html": null } } } }