{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:14:00.878382Z" }, "title": "Vy\u0101karana: A Colorless Green Benchmark for Syntactic Evaluation in Indic Languages", "authors": [ { "first": "Rajaswa", "middle": [], "last": "Patil", "suffix": "", "affiliation": { "laboratory": "Cognitive Neuroscience Lab BITS Pilani", "institution": "", "location": { "settlement": "Goa Campus" } }, "email": "" }, { "first": "Jasleen", "middle": [], "last": "Dhillon", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Siddhant", "middle": [], "last": "Mahurkar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": {} }, "email": "" }, { "first": "Saumitra", "middle": [], "last": "Kulkarni", "suffix": "", "affiliation": {}, "email": "kulkarnisaumitra98@gmail.com" }, { "first": "Manav", "middle": [], "last": "Malhotra", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Veeky", "middle": [], "last": "Baths", "suffix": "", "affiliation": { "laboratory": "Cognitive Neuroscience Lab BITS Pilani", "institution": "", "location": { "settlement": "Goa Campus" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "While there has been significant progress towards developing NLU resources for Indic languages, syntactic evaluation has been relatively less explored. Unlike English, Indic languages have rich morphosyntax, grammatical genders, free linear word-order, and highly inflectional morphology. In this paper, we introduce Vy\u0101karana: a benchmark of Colorless Green sentences in Indic languages for syntactic evaluation of multilingual language models. The benchmark comprises four syntax-related tasks: PoS Tagging, Syntax Tree-depth Prediction, Grammatical Case Marking, and Subject-Verb Agreement. We use the datasets from the evaluation tasks to probe five multilingual language models of varying architectures for syntax in Indic languages. 
Due to its prevalence, we also include a code-switching setting in our experiments. Our results show that the token-level and sentence-level representations from the Indic language models (IndicBERT and MuRIL) do not capture the syntax in Indic languages as efficiently as the other highly multilingual language models. Further, our layer-wise probing experiments reveal that while mBERT, DistilmBERT, and XLM-R localize the syntax in middle layers, the Indic language models do not show such syntactic localization.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "While there has been significant progress towards developing NLU resources for Indic languages, syntactic evaluation has been relatively less explored. Unlike English, Indic languages have rich morphosyntax, grammatical genders, free linear word-order, and highly inflectional morphology. In this paper, we introduce Vy\u0101karana: a benchmark of Colorless Green sentences in Indic languages for syntactic evaluation of multilingual language models. The benchmark comprises four syntax-related tasks: PoS Tagging, Syntax Tree-depth Prediction, Grammatical Case Marking, and Subject-Verb Agreement. We use the datasets from the evaluation tasks to probe five multilingual language models of varying architectures for syntax in Indic languages. Due to its prevalence, we also include a code-switching setting in our experiments. Our results show that the token-level and sentence-level representations from the Indic language models (IndicBERT and MuRIL) do not capture the syntax in Indic languages as efficiently as the other highly multilingual language models. 
Further, our layer-wise probing experiments reveal that while mBERT, DistilmBERT, and XLM-R localize the syntax in middle layers, the Indic language models do not show such syntactic localization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The Indian subcontinent is home to more than 450 languages spanning seven language families. Multilingualism and code-switching are common phenomena across these languages. Traditionally, the region has been at the center of many linguistic studies due to its rich linguistic diversity (Emeneau, 1956 ). The Indo-Aryan and the Dravidian language families are the most prominent ones in the subcontinent, with more than a billion speakers combined. Recent work in computational linguistics has focused on clubbing together the major languages from these two language families into a single group called Indic Languages. 1 Even with such a huge potential user base, the progress in building language technologies for the Indic languages has been limited. Most of the Indic languages fall under the category of low-resourced and mid-resourced languages. 
Some recent open-sourced efforts have tried to address this by developing various toolkits (Arora, 2020) , frameworks (Kunchukuttan, 2020) , language models, benchmarks, and datasets (Kakwani et al., 2020) .", "cite_spans": [ { "start": 287, "end": 301, "text": "(Emeneau, 1956", "ref_id": "BIBREF7" }, { "start": 945, "end": 958, "text": "(Arora, 2020)", "ref_id": null }, { "start": 972, "end": 992, "text": "(Kunchukuttan, 2020)", "ref_id": "BIBREF19" }, { "start": 1037, "end": 1059, "text": "(Kakwani et al., 2020)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To perform well on natural language understanding tasks, a language model should have a good understanding of the various general aspects of the structure of language, like its grammar and syntax. The structure of Indic languages is quite rich in its morphosyntax, which is partially shared across the languages. Thus, performing language modeling and downstream tasks on a multilingual dataset of Indic languages helps capture typologically generalized stimuli across the languages and implicitly addresses the issue of data scarcity. While there have been some efforts to analyze the role of multilingual training for Indic languages, it has been limited to downstream NLU tasks and large multilingual models which are not exclusively trained for Indic languages (Jain et al., 2020) . In this work, we introduce a syntactic evaluation benchmark of Colorless Green 2 sentences in Indic languages: Vy\u0101karana 3 , which provides a challenging environment to evaluate multilingual language models for their syntactic abilities specifically. We use the dataset to conduct layer-wise probing of five multilingual language models to inspect their understanding of the syntax in Indic languages. 
We include the IndicBERT (Kakwani et al., 2020) and MuRIL (Khanuja et al., 2021) 4 models in our experiments, which are trained on Indic languages and English exclusively. We probe the models for four syntax-related tasks: PoS Tagging (POS), Syntax Tree-depth Prediction (STDP), Grammatical Case Marking (GCM), and Subject-Verb Agreement (SVA). In an attempt to capture the linguistic diversity in the Indian subcontinent, we include one language each from the Indo-Aryan (Hindi) and Dravidian (Tamil) language families in the dataset. In order to incorporate the prevalent phenomena of code-switching in these languages, we also include English-based script-mixed versions of these languages in our experiments. Our layer-wise probing experiments show that the Indic language models do not show syntactic localization and do not capture the syntax in Indic languages as efficiently as the other \"highly multilingual language models\". 5", "cite_spans": [ { "start": 765, "end": 784, "text": "(Jain et al., 2020)", "ref_id": "BIBREF15" }, { "start": 1215, "end": 1237, "text": "(Kakwani et al., 2020)", "ref_id": "BIBREF17" }, { "start": 1248, "end": 1270, "text": "(Khanuja et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Language models are usually evaluated with information theory-based perplexity measures. While these perplexity measures might show good agreement with a language model's natural language understanding (NLU) capabilities, they do not capture language models' syntactic abilities efficiently (Tran et al., 2018) . Evaluating language models' syntactic abilities is quite important in furthering the research towards developing human-like robust language models (Linzen, 2020) . 
Recent work has focused on the targeted syntactic evaluation of language models (Linzen et al., 2016; Lau et al., 2017; Gulordava et al., 2018; Marvin and Linzen, 2018; McCoy et al., 2019; Futrell et al., 2019) , which takes inspiration from various psycholinguistic generalizations found in humans and assesses the role of syntax in the models' ability to perform various NLU tasks. While there has been significant progress towards building NLU evaluation benchmarks (Wang et al., 2018 , 2019) , the work in developing syntactic evaluation benchmarks has been quite recent and limited (Gauthier et al., 2020; Hu et al., 2020a; Mueller et al., 2020) .", "cite_spans": [ { "start": 291, "end": 310, "text": "(Tran et al., 2018)", "ref_id": "BIBREF41" }, { "start": 460, "end": 474, "text": "(Linzen, 2020)", "ref_id": "BIBREF26" }, { "start": 557, "end": 578, "text": "(Linzen et al., 2016;", "ref_id": "BIBREF27" }, { "start": 579, "end": 596, "text": "Lau et al., 2017;", "ref_id": null }, { "start": 597, "end": 620, "text": "Gulordava et al., 2018;", "ref_id": "BIBREF11" }, { "start": 621, "end": 645, "text": "Marvin and Linzen, 2018;", "ref_id": "BIBREF29" }, { "start": 646, "end": 665, "text": "McCoy et al., 2019;", "ref_id": "BIBREF30" }, { "start": 666, "end": 687, "text": "Futrell et al., 2019)", "ref_id": "BIBREF8" }, { "start": 946, "end": 964, "text": "(Wang et al., 2018", "ref_id": "BIBREF44" }, { "start": 965, "end": 985, "text": "(Wang et al., , 2019", "ref_id": "BIBREF43" }, { "start": 1077, "end": 1100, "text": "(Gauthier et al., 2020;", "ref_id": "BIBREF9" }, { "start": 1101, "end": 1118, "text": "Hu et al., 2020a;", "ref_id": "BIBREF13" }, { "start": 1119, "end": 1140, "text": "Mueller et al., 2020)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Probing is an alternative paradigm that can be used for syntactic evaluation of language models. 
Probing deals with quantifying the amount of linguistic information encoded in the pre-trained representations of language models (Adi et al., 2017; Conneau et al., 2018) . While this does not provide a very detailed analysis of models' syntactic behavior, it can be used to compare the amount of syntactic information captured by the models, as well as the inner dynamics of how and where this information is encoded in the models (Hewitt and Manning, 2019; Jawahar et al., 2019; Liu et al., 2019; Tenney et al., 2019b; Rogers et al., 2020) . Probing methods are also relatively compute-efficient, as they do not involve training or fine-tuning the language models.", "cite_spans": [ { "start": 227, "end": 245, "text": "(Adi et al., 2017;", "ref_id": "BIBREF0" }, { "start": 246, "end": 267, "text": "Conneau et al., 2018)", "ref_id": "BIBREF5" }, { "start": 529, "end": 555, "text": "(Hewitt and Manning, 2019;", "ref_id": "BIBREF12" }, { "start": 556, "end": 577, "text": "Jawahar et al., 2019;", "ref_id": "BIBREF16" }, { "start": 578, "end": 595, "text": "Liu et al., 2019;", "ref_id": "BIBREF28" }, { "start": 596, "end": 617, "text": "Tenney et al., 2019b;", "ref_id": "BIBREF40" }, { "start": 618, "end": 638, "text": "Rogers et al., 2020)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "With the recent development in multilingual language modeling, numerous evaluation studies have been performed to test the models' multilingual and cross-lingual abilities (Libovick\u00fd et al., 2019; R\u00f6nnqvist et al., 2019; Pires et al., 2019; Wu and Dredze, 2019; Artetxe et al., 2020) . Similar to monolingual language modeling paradigms, there has been significant work towards building multilingual evaluation benchmarks (Hu et al., 2020b; Kakwani et al., 2020; Liang et al., 2020) . However, these benchmarks do not cover syntactic evaluation efficiently, with POS tagging being the sole syntax-related task. 
Some recent studies have tried to address this by building multilingual and cross-lingual syntactic evaluation suites with subject-verb agreement tasks (Gulordava et al., 2018; Mueller et al., 2020) .", "cite_spans": [ { "start": 172, "end": 196, "text": "(Libovick\u00fd et al., 2019;", "ref_id": "BIBREF24" }, { "start": 197, "end": 220, "text": "R\u00f6nnqvist et al., 2019;", "ref_id": "BIBREF37" }, { "start": 221, "end": 240, "text": "Pires et al., 2019;", "ref_id": "BIBREF34" }, { "start": 241, "end": 261, "text": "Wu and Dredze, 2019;", "ref_id": "BIBREF46" }, { "start": 262, "end": 283, "text": "Artetxe et al., 2020)", "ref_id": "BIBREF2" }, { "start": 422, "end": 440, "text": "(Hu et al., 2020b;", "ref_id": "BIBREF14" }, { "start": 441, "end": 462, "text": "Kakwani et al., 2020;", "ref_id": "BIBREF17" }, { "start": 463, "end": 482, "text": "Liang et al., 2020)", "ref_id": "BIBREF23" }, { "start": 762, "end": 786, "text": "(Gulordava et al., 2018;", "ref_id": "BIBREF11" }, { "start": 787, "end": 808, "text": "Mueller et al., 2020)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Alongside the developments in multilingual NLP, there has been some recent progress towards advancing Indic NLP (Arora, 2020; Jain et al., 2020; Kakwani et al., 2020; Kunchukuttan, 2020) . While there are numerous datasets available to benchmark and compare multilingual language models on NLU tasks in Indic languages, there is no such resource available for syntactic evaluation in Indic languages. 
6 Most recent works (Jain et al., 2020; Kakwani et al., 2020; Khanuja et al., 2021) rely on POS tagging with Universal Dependencies treebanks as the sole task to compare the syntactic abilities of Indic language models.", "cite_spans": [ { "start": 112, "end": 125, "text": "(Arora, 2020;", "ref_id": null }, { "start": 126, "end": 144, "text": "Jain et al., 2020;", "ref_id": "BIBREF15" }, { "start": 145, "end": 166, "text": "Kakwani et al., 2020;", "ref_id": "BIBREF17" }, { "start": 167, "end": 186, "text": "Kunchukuttan, 2020)", "ref_id": "BIBREF19" }, { "start": 401, "end": 402, "text": "6", "ref_id": null }, { "start": 421, "end": 440, "text": "(Jain et al., 2020;", "ref_id": "BIBREF15" }, { "start": 441, "end": 462, "text": "Kakwani et al., 2020;", "ref_id": "BIBREF17" }, { "start": 463, "end": 484, "text": "Khanuja et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Neha eats mangoes and Tom eats apples. 3 Vy\u0101karana: A Colorless Green Benchmark 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "[Figure 1 (panels a-d): example Hindi sentences in Devanagari script with their colorless green PoS-tag templates; the character-level figure text is not recoverable here.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Syntax in Indic languages has specific characteristics which make it more challenging to model. Unlike English, most of the Indic languages follow a free linear word-order, with the default order being SOV (subject-object-verb). 
Hence, the linear distance (in terms of the number of intervening words) between the verb and its subject is usually larger in Indic languages as compared to English (Figure 1a ; Figure 1b ). Indic languages like Hindi and Tamil also have grammatical genders and grammatical number, where the words are morphologically inflected in agreement with the gender and number of their corresponding head nouns (Figure 1b) . Usually, the syntactic evaluation of language models is conducted with syntax-related probing tasks (Jawahar et al., 2019) , or with targeted syntactic evaluation through controlled psycholinguistic experiments (Gauthier et al., 2020) . Performing such syntactic evaluations under a colorless green setting ensures that the models do not leverage any semantic or lexical cues and biases from the text to process its syntactic structure (Gulordava et al., 2018; Goldberg, 2019) . 7 Example sentences from the benchmark dataset can be found in Appendix A. Most of the evaluation tasks from both of these categories can be designed with a syntactic dependency treebank. 
Hence, we construct a colorless green treebank for Indic languages, which can be used to perform various syntactic evaluation tasks under a colorless green setting.", "cite_spans": [ { "start": 745, "end": 767, "text": "(Jawahar et al., 2019)", "ref_id": "BIBREF16" }, { "start": 856, "end": 879, "text": "(Gauthier et al., 2020)", "ref_id": "BIBREF9" }, { "start": 1046, "end": 1047, "text": "7", "ref_id": null }, { "start": 1155, "end": 1179, "text": "(Gulordava et al., 2018;", "ref_id": "BIBREF11" }, { "start": 1180, "end": 1195, "text": "Goldberg, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 395, "end": 405, "text": "(Figure 1a", "ref_id": "FIGREF0" }, { "start": 408, "end": 417, "text": "Figure 1b", "ref_id": "FIGREF0" }, { "start": 631, "end": 642, "text": "(Figure 1b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "We use the Hindi Universal Dependency Treebank (HDTB) (Bhat et al., 2008; Palmer et al., 2009) to construct a new Colorless Green treebank for Hindi: CG-HDTB. 9 We follow the method proposed by Gulordava et al. (2018) to generate colorless green sentences. Given a sentence from the original HDTB treebank, we first convert it to a template colorless green sentence ( Figure 1c ) by removing all the content words (while retaining all the function words) from it. Each of the resulting gaps created by the removal of content words in the template sentence is then populated with a content word from another sentence in the treebank, where the grammatical information of the original and replacing content word is the same (Figure 1d ). 
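As a concrete illustration, the substitution procedure described above can be sketched as follows. This is a simplified, hypothetical sketch: the token format, feature strings, and function names are illustrative only and do not reflect HDTB's actual schema or the authors' implementation.

```python
import random

# Illustrative sketch of Gulordava-style colorless green generation:
# content words are replaced by random content words from elsewhere in
# the treebank that share the same PoS tag and morphological features,
# while function words are kept as-is.
CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}

def build_lexicon(treebank):
    """Index content words by (PoS tag, morphological features)."""
    lexicon = {}
    for sentence in treebank:
        for form, pos, feats in sentence:
            if pos in CONTENT_POS:
                lexicon.setdefault((pos, feats), set()).add(form)
    return lexicon

def colorless_green(sentence, lexicon, rng):
    """Replace each content word with a random same-featured word."""
    out = []
    for form, pos, feats in sentence:
        if pos in CONTENT_POS:
            # Exclude the original form; fall back to it if no alternative exists.
            candidates = sorted(lexicon.get((pos, feats), set()) - {form}) or [form]
            form = rng.choice(candidates)
        out.append((form, pos, feats))
    return out

# Toy two-sentence "treebank" with (form, PoS, features) tokens.
treebank = [
    [("Neha", "NOUN", "Fem|Sing"), ("mangoes", "NOUN", "Masc|Plur"),
     ("eats", "VERB", "Sing"), ("and", "CCONJ", "")],
    [("Riya", "NOUN", "Fem|Sing"), ("kites", "NOUN", "Masc|Plur"),
     ("flies", "VERB", "Sing"), ("and", "CCONJ", "")],
]
lexicon = build_lexicon(treebank)
print(colorless_green(treebank[0], lexicon, random.Random(0)))
# Each content word has exactly one same-featured alternative here, so the
# result is deterministic: Neha→Riya, mangoes→kites, eats→flies; "and" is kept.
```

The key invariant, as in the paper's construction, is that PoS tags and morphological features of every slot are preserved, so the morphosyntax survives while the lexical content becomes nonsensical.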
The resulting colorless green sentence still retains the same grammatical information and syntactic structure as the original sentence, whereas the random substitution of the content words ensures that it is rendered nonsensical (Figure 1a ; Figure 1e ).", "cite_spans": [ { "start": 54, "end": 73, "text": "(Bhat et al., 2008;", "ref_id": "BIBREF3" }, { "start": 74, "end": 94, "text": "Palmer et al., 2009)", "ref_id": "BIBREF32" }, { "start": 159, "end": 160, "text": "9", "ref_id": null }, { "start": 194, "end": 217, "text": "Gulordava et al. (2018)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 368, "end": 377, "text": "Figure 1c", "ref_id": "FIGREF0" }, { "start": 722, "end": 732, "text": "(Figure 1d", "ref_id": "FIGREF0" }, { "start": 965, "end": 975, "text": "(Figure 1a", "ref_id": "FIGREF0" }, { "start": 978, "end": 987, "text": "Figure 1e", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "CG-HDTB: A Colorless Green Treebank for Hindi 8", "sec_num": "3.1" }, { "text": "Noun Verb Adjective Adverb Table 1 : The grammatical information taken into account for various PoS categories of content words (Vikram, 2013) .", "cite_spans": [ { "start": 128, "end": 142, "text": "(Vikram, 2013)", "ref_id": "BIBREF42" } ], "ref_spans": [ { "start": 27, "end": 34, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "PoS Tag Gen Num Case Per", "sec_num": null }, { "text": "Table 2 : The number of sentences and the statistics for the token-level grammatical gender feature in the HDTB and CG-HDTB treebanks (the statistics are the same for the CG-HDTB and csCG-HDTB treebanks). The grammatical information taken into consideration while substituting the content words includes the Grammatical Gender (Gen) (masculine/feminine), Grammatical Number (Num) (singular/plural), Grammatical Case (Case) (Section 3.2), and Person (Per) (first/second/third). 
This makes sure that the replacing content word is consistent with the syntax and morphological inflections of the original content word, keeping the morphosyntax of the sentence intact. The grammatical information required for every PoS category is different, as shown in Table 1 . The HDTB treebank has an imbalanced count of Grammatical Gender features. An uncontrolled, random substitution of content words can therefore result in a gender-imbalanced dataset of colorless green sentences. We eliminate this imbalance by controlling the grammatical gender information in the colorless green sentences. For every sentence in the HDTB treebank, we generate four colorless green sentences, where the gender encoding of the sentences is: 1. Exactly the same as that of the original sentence. 2. Exactly the opposite of that of the original sentence. 3. Entirely masculine.", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 231, "text": "Table 2", "ref_id": null }, { "start": 739, "end": 746, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "PoS Tag Gen Num Case Per", "sec_num": null }, { "text": "Apart from the PoS categories of content words mentioned in Table 1 , we also adjust the gender-inflected Adposition tokens in the colorless green sentences in order to maintain the morphosyntax of the sentence. The statistics for the Grammatical Gender feature in the gender-imbalanced HDTB treebank and the gender-balanced CG-HDTB treebank are shown in Table 2 . Further, we swap the train and test sets of the HDTB treebank while generating colorless green sentences (Table 2) . 
This allows the test set of CG-HDTB to be significantly larger than the training set, providing a challenging test setting for the models' syntactic generalization capabilities.", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 67, "text": "Table 1", "ref_id": null }, { "start": 354, "end": 361, "text": "Table 2", "ref_id": null }, { "start": 468, "end": 477, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Entirely feminine.", "sec_num": "4." }, { "text": "Unlike Hindi, there is no publicly available treebank for the Hindi-English code-switched setting with all the features that are required to obtain the colorless green sentences. Hence, we artificially create a code-switched treebank for Hindi-English: csCG-HDTB. The csCG-HDTB treebank is a parallel code-switched version of the CG-HDTB treebank. Following Khanuja et al. (2021) 's approach, we use the Indic-Trans library 10 to transliterate the tokens (Hindi-Devanagari script \u2192 English-Latin script) from each sentence in the CG-HDTB treebank. 11 While this only incorporates the script-mixing aspect of code-switching, the approach can be scaled easily across different datasets and languages, eliminating the need for explicit annotations of morphosyntactic features on code-switched data.", "cite_spans": [ { "start": 354, "end": 375, "text": "Khanuja et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Entirely feminine.", "sec_num": "4." }, { "text": "The benchmark comprises four syntactic evaluation tasks. We borrow certain design principles for the benchmark from the XTREME benchmark Hu et al. (2020b): 1. Task Difficulty: The colorless-green setting and a sufficiently large test set ensure a certain level of difficulty across all the tasks. Further, some tasks require capturing long-range relationships and morphological inflections, which makes them more challenging. 2. 
Task Diversity: We include both the token-level and the sentence-level evaluation tasks in the benchmark. This ensures that the benchmark evaluates representations at both granularities. 3. Data and Training Efficiency: Under the colorless-green setting, each task has a large test-train ratio in terms of data samples ( Table 3 ). Hence, the datasets for the evaluation tasks are quite challenging in terms of data and training efficiency. We use the HDTB, CG-HDTB, and csCG-HDTB treebanks to construct datasets for all the evaluation tasks. 12 The statistics for the constructed datasets are given in Table 3 .", "cite_spans": [ { "start": 976, "end": 978, "text": "12", "ref_id": null } ], "ref_spans": [ { "start": 754, "end": 761, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 1038, "end": 1045, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evaluation Tasks", "sec_num": "3.2" }, { "text": "PoS Tagging (POS): While PoS tagging is a very weak construct for a syntactic evaluation task, it is a preliminary step in syntactic processing. It is also the only existing syntax-related evaluation task used to compare currently available Indic language models. Hence, we include PoS tagging as the first evaluation task in the benchmark. The task is designed under a token-level multi-class single-label classification setting. We use the UPOS tags from the treebanks as the ground-truth labels for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Tasks", "sec_num": "3.2" }, { "text": "Syntax Tree-depth Prediction (STDP): Following the work done by Conneau et al. (2018) and Jawahar et al. (2019) , we use the dependency trees from the treebanks to perform the syntax tree-depth prediction task. Successful prediction of the depth of a dependency tree depicts the model's ability to get a surface-level estimate of a given sentence's syntactic structure. 
The task is designed under a sentence-level multi-class single-label classification setting.", "cite_spans": [ { "start": 64, "end": 85, "text": "Conneau et al. (2018)", "ref_id": "BIBREF5" }, { "start": 90, "end": 111, "text": "Jawahar et al. (2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Tasks", "sec_num": "3.2" }, { "text": "Grammatical Case Marking (GCM): Given the nature of morphosyntax in Indic languages, case marking is an essential syntactic evaluation task. Unlike languages like English, a fixed linear word-order cannot be used to perform case marking in Indic languages. Instead, one must rely on morphological inflections and Adposition tokens in the sentence to assign appropriate grammatical cases. Moreover, a single token can be marked with a combination of multiple cases. In the HDTB treebank, the grammatical case feature has the following seven unique values: accusative, nominative, accusative-inessive, dative-accusative, ergative-accusative, genitive-accusative, and instrumental-accusative. Hence, this task is designed under a token-level multi-class single-label classification setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Tasks", "sec_num": "3.2" }, { "text": "Subject-Verb Agreement (SVA): We include subject-verb agreement as an evaluation task to study the long-range syntactic dependencies in Indic languages. Unlike English, the subject-verb agreement in Indic languages is dependent on the grammatical number as well as the grammatical gender of the tokens. Following the work done by Linzen et al. (2016) , we include all the tokens preceding (and excluding) the target verb in the sentence. Given such a sentence, the task is to predict the target verb's grammatical number and grammatical gender in agreement with its subject (head noun). 
Given that most of the Indic languages follow a free linear word-order and the default order is SOV (subject-object-verb), the number of intervening nouns is significantly higher than in English. Moreover, there is a high probability of the object acting as an attractor noun (Figure 1) . Hence, the SVA task is significantly more challenging in Indic languages. The task is designed under a sentence-level multi-class single-label classification setting, given the four possible ground truth labels: masculine-singular, masculine-plural, feminine-singular, and feminine-plural. ", "cite_spans": [ { "start": 330, "end": 350, "text": "Linzen et al. (2016)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 862, "end": 872, "text": "(Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation Tasks", "sec_num": "3.2" }, { "text": "We use the datasets constructed for the evaluation tasks to perform layer-wise probing 14 of token-level and sentence-level representations from the five transformer-based multilingual language models: mBERT (Devlin et al., 2019) , XLM-R (Conneau et al., 2020), DistilmBERT (Sanh et al., 2019) , IndicBERT (Kakwani et al., 2020) , and MuRIL (Khanuja et al., 2021) . We use a single linear-layer (initialized with the same weights across all the experiments) as the probing classifier for all the tasks and models. Since POS and GCM are token-level tasks, we use the first-subword token-embeddings as the input to the probing classifier. For the sentence-level STDP and SVA tasks, we use the special sentence-token embeddings. We use Hugging Face's transformers library (Wolf et al., 2020) to access the pre-trained instances of these language models. The probing classifier is trained using the PyTorch library (Paszke et al., 2019) . 
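For intuition, the probing setup can be approximated with the minimal NumPy sketch below: a single linear classifier trained on frozen representations, which the language model never updates. The toy embeddings, label set, and hyperparameters are illustrative assumptions, not the paper's actual PyTorch/transformers pipeline (which also monitors weighted-F1 rather than plain accuracy).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_probe(X, y, n_classes, lr=0.5, epochs=200, seed=0):
    """Train a single-linear-layer probe on frozen embeddings X."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                 # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W + b)               # class probabilities
        G = (P - Y) / len(X)                 # softmax cross-entropy gradient
        W -= lr * (X.T @ G)                  # only the probe is updated;
        b -= lr * G.sum(axis=0)              # X itself stays frozen
    return W, b

def predict(X, W, b):
    return (X @ W + b).argmax(axis=1)

# Toy "frozen embeddings": two well-separated clusters standing in for
# the representations extracted from a language model layer.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 8)), rng.normal(2, 0.5, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
W, b = train_probe(X, y, n_classes=2)
accuracy = (predict(X, W, b) == y).mean()
```

Repeating this procedure per layer (feeding each layer's representations to a fresh probe) yields the layer-wise scores discussed in Section 4.3.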
Given that all the evaluation tasks are designed under a multi-class single-label classification setting, we monitor the weighted-F1 scores to evaluate the models (Table 4) .", "cite_spans": [ { "start": 207, "end": 228, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 273, "end": 292, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF38" }, { "start": 305, "end": 327, "text": "(Kakwani et al., 2020)", "ref_id": "BIBREF17" }, { "start": 340, "end": 362, "text": "(Khanuja et al., 2021)", "ref_id": null }, { "start": 777, "end": 796, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF45" }, { "start": 919, "end": 940, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 1106, "end": 1115, "text": "(Table 4)", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Overall, due to the lack of semantic and lexical cues, and a larger test set, all the models find the CG-HDTB data more challenging than the HDTB data (Table 4) . The performance across all the models and tasks further deteriorates with the code-switched csCG-HDTB data. Given its considerably larger scale (model and pre-training data), the XLM-R model significantly outperforms the other models on the token-level tasks (POS and GCM) for all three treebanks. In contrast, for the sentence-level tasks (STDP and SVA), mBERT and DistilmBERT perform better than the rest of the models for the HDTB and CG-HDTB data. This might be explained by the sentence-level pre-training task (Next Sentence Prediction) used by mBERT and DistilmBERT. Overall, for Hindi data, mBERT shows the best average weighted-F1 scores across the four tasks, and the Indic models perform the worst. 
On the other hand, XLM-R significantly outperforms the rest of the models under a code-switched Hindi-English setting (Table 4) .", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 160, "text": "(Table 4)", "ref_id": "TABREF6" }, { "start": 994, "end": 1003, "text": "(Table 4)", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Probing Results", "sec_num": "4.2" }, { "text": "Even though the IndicBERT and MuRIL models outperform mBERT and XLM-R on many downstream NLU tasks (Kakwani et al., 2020; Khanuja et al., 2021) , they consistently fail to catch up in syntactic evaluation. Even DistilmBERT, a relatively smaller model, outperforms IndicBERT and MuRIL across all the tasks. There might be multiple plausible reasons behind this finding. Both IndicBERT and MuRIL perform masked word/token language modeling and do not have a sentence-level pre-training task. Hence, while they are outperformed by mBERT and DistilmBERT, they outperform XLM-R on most of the sentence-level tasks with Hindi data. Moreover, they are significantly smaller in architecture size (IndicBERT) and dataset size as compared to the other models. mBERT, XLM-R, and DistilmBERT are highly multilingual language models, pre-trained on more than 100 languages. This might provide them with the linguistic and typological generalization required for modeling morphosyntax more efficiently than the Indic models, which are only trained on a handful of Indic languages and English. Even though the MuRIL model is pre-trained with artificially generated parallel translated and transliterated sentences in Indic languages and their English counterparts, it is outperformed by XLM-R on the csCG-HDTB Hindi-English code-switched data. 
The XLM-R model has a small amount of natural code-switched data in its pre-training corpus, which might be the reason behind its dominance under the code-switching setting.", "cite_spans": [ { "start": 99, "end": 121, "text": "(Kakwani et al., 2020;", "ref_id": "BIBREF17" }, { "start": 122, "end": 143, "text": "Khanuja et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Probing Results", "sec_num": "4.2" }, { "text": "It has been previously shown that linguistic knowledge is localized in monolingual BERT (Rogers et al., 2020) . While surface-level linguistic information like linear word order and sentence length is captured in the lower layers of the model (Jawahar et al., 2019) , syntactic knowledge is found in the middle layers of the model (Hewitt and Manning, 2019; Jawahar et al., 2019) . While basic syntactic operations are encoded in relatively lower layers (Jawahar et al., 2019; Tenney et al., 2019a) , more complex tasks with long-range syntactic dependencies are performed best by relatively higher layers (Goldberg, 2019) . Rogers et al. (2020) explain this localization of syntactic knowledge using Liu et al. (2019) 's finding that the middle layers of BERT generalize best across tasks while the deeper layers are more task-specific. 
We investigate the localization of syntactic knowledge in multilingual language models by plotting the layer-wise performance across the various evaluation tasks with CG-HDTB data (Figure 2).", "cite_spans": [ { "start": 92, "end": 113, "text": "(Rogers et al., 2020)", "ref_id": "BIBREF36" }, { "start": 247, "end": 269, "text": "(Jawahar et al., 2019;", "ref_id": "BIBREF16" }, { "start": 339, "end": 365, "text": "(Hewitt and Manning, 2019;", "ref_id": "BIBREF12" }, { "start": 366, "end": 387, "text": "Jawahar et al., 2019)", "ref_id": "BIBREF16" }, { "start": 462, "end": 484, "text": "(Jawahar et al., 2019;", "ref_id": "BIBREF16" }, { "start": 485, "end": 506, "text": "Tenney et al., 2019a)", "ref_id": "BIBREF39" }, { "start": 614, "end": 630, "text": "(Goldberg, 2019)", "ref_id": "BIBREF10" }, { "start": 633, "end": 653, "text": "Rogers et al. (2020)", "ref_id": "BIBREF36" }, { "start": 709, "end": 726, "text": "Liu et al. (2019)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 1030, "end": 1040, "text": "(Figure 2)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Localization of Syntactic Knowledge", "sec_num": "4.3" }, { "text": "For the token-level tasks (POS and GCM) (Figures 2a, 2c), we find a pattern similar to that of monolingual BERT in mBERT, DistilmBERT, and XLM-R, where the weighted-F1 scores peak at the middle layers. The same, however, is not true for IndicBERT and MuRIL, which show declining weighted-F1 scores with increasing depth in the model. This suggests that their layers become more task-specific with increasing depth. This is unusual behavior for the Indic models; investigating its cause should be an interesting research direction. The XLM-R model consistently outperforms the rest of the models across all the layers, whereas the Indic language models consistently fall behind the other highly multilingual models. 
Between the Indic models, IndicBERT significantly outperforms MuRIL across all the layers.", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 49, "text": "(Figure 2a 2c", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Localization of Syntactic Knowledge", "sec_num": "4.3" }, { "text": "For the sentence-level tasks (STDP and SVA) (Figures 2b, 2d), XLM-R, IndicBERT, and MuRIL show no clear localization pattern (with the exception of IndicBERT on the STDP task). mBERT and DistilmBERT show a consistent performance across all the layers, outperforming the rest of the models for both tasks. Similar to Jawahar et al. (2019)'s findings on the STDP task with monolingual BERT, they peak at the middle layers. However, the difference between the layers' performance is not very significant (Figure 2b). IndicBERT performs particularly well on the STDP task, showing metrics comparable with mBERT and DistilmBERT. While MuRIL starts lowest for the STDP task, it eventually manages to outperform XLM-R, but still consistently lags behind IndicBERT (Figure 2b). For the SVA task, however, MuRIL performs better than IndicBERT for most of the layers (Figure 2d). The XLM-R model shows a dip in performance in the middle layers for both tasks, the exact opposite of its behavior on the token-level tasks and of previous findings with monolingual BERT.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 54, "text": "(Figure 2b 2d)", "ref_id": "FIGREF1" }, { "start": 499, "end": 510, "text": "(Figure 2b)", "ref_id": "FIGREF1" }, { "start": 752, "end": 763, "text": "(Figure 2b)", "ref_id": "FIGREF1" }, { "start": 849, "end": 860, "text": "(Figure 2d)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Localization of Syntactic Knowledge", "sec_num": "4.3" }, { "text": "This work presents a gender-balanced benchmark evaluation of Colorless Green sentences in Indic languages for syntactic testing of multilingual language models. 
By doing so, we aim to address the existing research gap in syntactic testing in Indic NLP. We introduce four new treebanks for this purpose: CG-HDTB, csCG-HDTB, CG-TTB, and csCG-TTB. We use the datasets from the treebanks to perform four syntax-related benchmarking evaluation tasks. In our experiments, we perform layer-wise probing of token-level and sentence-level representations from five different multilingual language models. Our experiments reveal that performing syntactic evaluation under a colorless green setting ensures that semantic and lexical cues do not add evaluation artifacts in the form of higher-than-actual performance metrics. Further, the experiments also reveal that the multilingual language models suffer significantly under a code-switched setting, an important phenomenon in Indic languages. Overall, our experiments show that the currently available Indic language models do not capture syntax as efficiently as the other highly multilingual language models. The Indic models seem to outperform other multilingual language models on various NLU tasks in Indic languages even with this relative lack of syntactic knowledge. This might suggest that the Indic language models do not rely significantly on syntax while making inferences on such NLU tasks. We also observe that training on large artificial corpora of transliterated texts does not help Indic language models in capturing syntax under a code-switched setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Further, we find that unlike the other highly multilingual language models, the currently available Indic language models do not show any syntactic localization in the middle layers. This behavior is unique to the multilingual Indic language models; its causes and effects can be investigated in an independent study. 
While the current work only covers the Hindi and Tamil languages, it lays down the framework for performing syntactic evaluation in other Indic languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "As a continuation of the current work, we aim to cover more Indic languages and their code-switched counterparts in the benchmark. We aim to cover typologically different languages from both the Indo-Aryan and the Dravidian language families, with a special focus on low-resourced Indic languages. This can be an extremely challenging task. Developing datasets for syntactic evaluation requires a certain level of linguistic expertise, which is usually found in trained and native speakers of the particular language. While the methods used in this work relax these requirements to some extent, they depend on existing syntactic dependency annotations. Such fine-grained annotations are not readily available for a majority of the Indic languages. Hence, we plan to open-source the benchmark datasets. We aim to continue developing the benchmark with open-source contributions from trained and native speakers of various Indic languages. 
The treebank datasets used for this benchmark are publicly available for benchmarking and development purposes on GitHub and the Hugging Face Datasets platform (Lhoest et al., 2021). Table 5: The weighted-F1 scores for the last layer and the best layer (layer number mentioned in parentheses) for the layer-wise probing experiments with the Tamil language data.", "cite_spans": [ { "start": 1116, "end": 1136, "text": "(Lhoest et al., 2021", "ref_id": null } ], "ref_spans": [ { "start": 1137, "end": 1144, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Development and Accessibility", "sec_num": "6" }, { "text": "Due to limitations of the original TTB Tamil Universal Dependencies treebank, we do not cover certain grammatical-gender-based methods and experiments with the Tamil language in the current work. 17 Here, we report all the other results for the CG-TTB and csCG-TTB treebank datasets.", "cite_spans": [ { "start": 194, "end": 196, "text": "17", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "B Experiments with Tamil Language", "sec_num": null }, { "text": "While comparing the performance between the models, all the observations and inferences discussed in Section 4.2 also hold true for the Tamil language data, as seen in Table 5. Overall, we observe a relatively lower performance on the CG-TTB and csCG-TTB datasets (Table 5) as compared to the CG-HDTB and csCG-HDTB datasets, respectively (Table 4).", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 174, "text": "Table 5", "ref_id": null }, { "start": 264, "end": 273, "text": "(Table 5)", "ref_id": null }, { "start": 333, "end": 342, "text": "(Table 4)", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "B.1 Probing Results", "sec_num": null }, { "text": "All the observations and inferences discussed in Section 4.3 also hold true for the Tamil language data, as seen in Figure 5, Figure 6, and Figure 7. 
Due to the relatively lower performance on the Tamil language data, all the curves are shifted downwards while showing largely similar trends and structures.", "cite_spans": [], "ref_spans": [ { "start": 115, "end": 123, "text": "Figure 5", "ref_id": null }, { "start": 126, "end": 134, "text": "Figure 6", "ref_id": null }, { "start": 141, "end": 149, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "B.2 Localization of Syntactic Knowledge", "sec_num": null }, { "text": "17 This includes the controlled generation of colorless green sentences with gender-balancing and the Subject-Verb Agreement (SVA) task. Figure 7: Weighted-F1 scores for the layer-wise probing experiments with the CG-TTB test set for the GCM task.", "cite_spans": [], "ref_spans": [ { "start": 138, "end": 146, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "B.2 Localization of Syntactic Knowledge", "sec_num": null }, { "text": "From a linguistic perspective, the term Indic is used for the Indo-Aryan language family. 
However, in this work, we use the term to represent all the languages used in the subcontinent (following the recent work in \"Indic NLP\"). 2 Colorless green sentences are grammatically correct but semantically nonsensical. 3 Vy\u0101karana is the Sanskrit term for grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Collectively referred to as \"Indic language models\" from here on. 5 mBERT, DistilmBERT, and XLM-R", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To the best of our knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In this section, we use Hindi as the demonstrative language; a similar approach is used for the Tamil language. 9 Similarly, we use the Tamil Universal Dependency Treebank (TTB) (Ramasamy and \u017dabokrtsk\u00fd, 2012) to construct a new Colorless Green treebank for Tamil: CG-TTB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/libindic/indic-trans 11 Similarly, we create a code-switched treebank for Tamil-English: csCG-TTB. 12 Similarly, we use the CG-TTB and csCG-TTB treebanks to conduct the experiments with the Tamil language (Appendix B).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In this section, we discuss the experiments with the Hindi language data. Corresponding details about the experiments with the Tamil language data can be found in Appendix B. 14 We do not fine-tune the models due to computational limitations. 
Instead, we report the probing metrics of the last layer and the best-performing layer of every model (Table 4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/rajaswa/indic-syntax-evaluation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Greg Durrett for his feedback on an early draft of this work. We would also like to thank K. A. Geetha for contributing to our discussions around the work for the Tamil language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks", "authors": [ { "first": "Yossi", "middle": [], "last": "Adi", "suffix": "" }, { "first": "Einat", "middle": [], "last": "Kermany", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Lavi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "
iNLTK: Natural language toolkit for indic languages", "authors": [ { "first": "Gaurav", "middle": [], "last": "Arora", "suffix": "" } ], "year": null, "venue": "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/2020.nlposs-1.10" ] }, "num": null, "urls": [], "raw_text": "Gaurav Arora. 2020. iNLTK: Natural language toolkit for indic languages. In Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS), pages 66-71, Online. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On the cross-lingual transferability of monolingual representations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4623--4637", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.421" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The hindi/urdu treebank project", "authors": [ { "first": "Ahmad", "middle": [], "last": "Riyaz", "suffix": "" }, { "first": "Rajesh", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Annahita", "middle": [], "last": "Bhatt", "suffix": "" }, { "first": "Prescott", "middle": [], "last": "Farudi", "suffix": "" }, { "first": "Bhuvana", "middle": [], "last": "Klassen", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Narasimhan", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Dipti", "middle": [ "Misra" ], "last": "Rambow", "suffix": "" }, { "first": "Ashwini", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "", "middle": [], "last": "Vaidya", "suffix": "" }, { "first": "", "middle": [], "last": "Sri Ramagurumurthy", "suffix": "" }, { "first": "", "middle": [], "last": "Vishnu", "suffix": "" } ], "year": 2008, "venue": "Handbook of Linguistic Annotation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riyaz Ahmad Bhat, Rajesh Bhatt, Annahita Farudi, Prescott Klassen, Bhuvana Narasimhan, Martha Palmer, Owen Rambow, Dipti Misra Sharma, Ashwini Vaidya, Sri Ramagurumurthy Vishnu, et al. 2008. The hindi/urdu treebank project. In Handbook of Linguistic Annotation. 
Springer Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "German", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2126--2136", "other_ids": { "DOI": [ "10.18653/v1/P18-1198" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Australia. 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "India as a linguistic area", "authors": [ { "first": "M", "middle": [ "B" ], "last": "Emeneau", "suffix": "" } ], "year": 1956, "venue": "Language", "volume": "32", "issue": "1", "pages": "3--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. B. Emeneau. 1956. India as a linguistic area. 
Language, 32(1):3-16.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Neural language models as psycholinguistic subjects: Representations of syntactic state", "authors": [ { "first": "Richard", "middle": [], "last": "Futrell", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Morita", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "32--42", "other_ids": { "DOI": [ "10.18653/v1/N19-1004" ] }, "num": null, "urls": [], "raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 32-42, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "SyntaxGym: An online platform for targeted evaluation of language models", "authors": [ { "first": "Jon", "middle": [], "last": "Gauthier", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "70--76", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-demos.10" ] }, "num": null, "urls": [], "raw_text": "Jon Gauthier, Jennifer Hu, Ethan Wilcox, Peng Qian, and Roger Levy. 2020. SyntaxGym: An online platform for targeted evaluation of language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 70-76, Online. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Assessing BERT's syntactic abilities", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. 
CoRR, abs/1901.05287.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Colorless green recurrent networks dream hierarchically", "authors": [ { "first": "Kristina", "middle": [], "last": "Gulordava", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1195--1205", "other_ids": { "DOI": [ "10.18653/v1/N18-1108" ] }, "num": null, "urls": [], "raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": { "DOI": [ "10.18653/v1/N19-1419" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A systematic assessment of syntactic generalization in neural language models", "authors": [ { "first": "Jennifer", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Gauthier", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1725--1744", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.158" ] }, "num": null, "urls": [], "raw_text": "Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020a. A systematic assessment of syntactic generalization in neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725-1744, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation", "authors": [ { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 37th International Conference on Machine Learning", "volume": "119", "issue": "", "pages": "4411--4421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020b. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4411-4421. PMLR.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Indictransformers: An analysis of transformer language models for indian languages", "authors": [ { "first": "Kushal", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Adwait", "middle": [], "last": "Deshpande", "suffix": "" }, { "first": "Kumar", "middle": [], "last": "Shridhar", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kushal Jain, Adwait Deshpande, Kumar Shridhar, Felix Laumann, and Ayushman Dash. 2020. 
Indictransformers: An analysis of transformer language models for indian languages.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "What does BERT learn about the structure of language", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "IndicNLPSuite: Monolingual corpora, evaluation benchmarks and pre-trained multilingual language models for Indian languages", "authors": [ { "first": "Divyanshu", "middle": [], "last": "Kakwani", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Kunchukuttan", "suffix": "" }, { "first": "Satish", "middle": [], "last": "Golla", "suffix": "" }, { "first": "N", "middle": [ "C" ], "last": "Gokul", "suffix": "" }, { "first": "Avik", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "M", "middle": [], "last": "Mitesh", "suffix": "" }, { "first": "Pratyush", "middle": [], "last": "Khapra", "suffix": "" }, { "first": "", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "4948--4961", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.445" ] }, "num": null, "urls": [], "raw_text": "Divyanshu 
Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual corpora, evaluation benchmarks and pre-trained multilingual language models for Indian languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4948-4961, Online. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Muril: Multilingual representations for indian languages", "authors": [ { "first": "Simran", "middle": [], "last": "Khanuja", "suffix": "" }, { "first": "Diksha", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Sarvesh", "middle": [], "last": "Mehtani", "suffix": "" }, { "first": "Savya", "middle": [], "last": "Khosla", "suffix": "" }, { "first": "Atreyee", "middle": [], "last": "Dey", "suffix": "" }, { "first": "Balaji", "middle": [], "last": "Gopalan", "suffix": "" }, { "first": "Dilip", "middle": [], "last": "Kumar Margam", "suffix": "" }, { "first": "Pooja", "middle": [], "last": "Aggarwal", "suffix": "" }, { "first": "Rajiv", "middle": [], "last": "Teja Nagipogu", "suffix": "" }, { "first": "Shachi", "middle": [], "last": "Dave", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Gupta", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Kumar Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, Shruti Gupta, Subhash Chandra Bose Gali, Vish Subramanian, and Partha Talukdar. 2021. 
MuRIL: Multilingual representations for Indian languages.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The IndicNLP Library", "authors": [ { "first": "Anoop", "middle": [], "last": "Kunchukuttan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anoop Kunchukuttan. 2020. The IndicNLP Library. https://github.com/anoopkunchukuttan/indic_nlp_library/blob/master/docs/indicnlp.pdf.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge", "authors": [ { "first": "Jey Han", "middle": [], "last": "Lau", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Shalom", "middle": [], "last": "Lappin", "suffix": "" } ], "year": 2017, "venue": "Cognitive Science", "volume": "41", "issue": "5", "pages": "1202--1241", "other_ids": { "DOI": [ "10.1111/cogs.12414" ] }, "num": null, "urls": [], "raw_text": "Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge.
Cognitive Science, 41(5):1202-1241.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Datasets: A community library for natural language processing", "authors": [ { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Villanova del Moral", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Thakur", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "von Platen", "suffix": "" }, { "first": "Suraj", "middle": [], "last": "Patil", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Lewis", "middle": [], "last": "Tunstall", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Mario", "middle": [], "last": "\u0160a\u0161ko", "suffix": "" }, { "first": "Gunjan", "middle": [], "last": "Chhablani", "suffix": "" }, { "first": "Bhavitvya", "middle": [], "last": "Malik", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Brandeis", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Patry", "suffix": "" }, { "first": "Angelina", "middle": [], "last": "McMillan-Major", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Schmid", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Cl\u00e9ment", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Th\u00e9o", "middle": [], "last": "Matussi\u00e8re", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Stas", "middle": [], "last": "Bekman", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "Goehringer", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Mustar", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Lagunas", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen,
Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario \u0160a\u0161ko, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Cl\u00e9ment Delangue, Th\u00e9o Matussi\u00e8re, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, Fran\u00e7ois Lagunas, Alexander M. Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation", "authors": [ { "first": "Yaobo", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Yeyun", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Fenfei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Weizhen", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Linjun", "middle": [], "last": "Shou", "suffix": "" }, { "first": "Daxin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Guihong", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Ruofei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Sining", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Taroon", "middle": [], "last": "Bharti", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Jiun-Hung", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Winnie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Shuguang", "middle":
[], "last": "Liu", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Campos", "suffix": "" }, { "first": "Rangan", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6008--6018", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.484" ] }, "num": null, "urls": [], "raw_text": "Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008-6018, Online. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "How language-neutral is multilingual BERT?", "authors": [ { "first": "Jindrich", "middle": [], "last": "Libovick\u00fd", "suffix": "" }, { "first": "Rudolf", "middle": [], "last": "Rosa", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 2019, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jindrich Libovick\u00fd, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multilingual BERT?
CoRR, abs/1911.03310.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Open sesame: Getting inside BERT's linguistic knowledge", "authors": [ { "first": "Yongjie", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chern Tan", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "241--253", "other_ids": { "DOI": [ "10.18653/v1/W19-4825" ] }, "num": null, "urls": [], "raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "How can we accelerate progress towards human-like linguistic generalization?", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5210--5217", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.465" ] }, "num": null, "urls": [], "raw_text": "Tal Linzen. 2020. How can we accelerate progress towards human-like linguistic generalization? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5210-5217, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "521--535", "other_ids": { "DOI": [ "10.1162/tacl_a_00115" ] }, "num": null, "urls": [], "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521-535.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Linguistic knowledge and transferability of contextual representations", "authors": [ { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1073--1094", "other_ids": { "DOI": [ "10.18653/v1/N19-1112" ] }, "num": null, "urls": [], "raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Targeted syntactic evaluation of language models", "authors": [ { "first": "Rebecca", "middle": [], "last": "Marvin", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1192--1202", "other_ids": { "DOI": [ "10.18653/v1/D18-1151" ] }, "num": null, "urls": [], "raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "authors": [ { "first": "Tom", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3428--3448", "other_ids": { "DOI": [ "10.18653/v1/P19-1334" ] }, "num": null, "urls": [], "raw_text": "Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy.
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Cross-linguistic syntactic evaluation of word prediction models", "authors": [ { "first": "Aaron", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Garrett", "middle": [], "last": "Nicolai", "suffix": "" }, { "first": "Panayiota", "middle": [], "last": "Petrou-Zeniou", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Talmina", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5523--5539", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.490" ] }, "num": null, "urls": [], "raw_text": "Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, Natalia Talmina, and Tal Linzen. 2020. Cross-linguistic syntactic evaluation of word prediction models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5523-5539, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Hindi syntax: Annotating dependency, lexical predicate-argument structure, and phrase structure", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Rajesh", "middle": [], "last": "Bhatt", "suffix": "" }, { "first": "Bhuvana", "middle": [], "last": "Narasimhan", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "Dipti", "middle": [ "Misra" ], "last": "Sharma", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" } ], "year": 2009, "venue": "The 7th International Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "14--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Palmer, Rajesh Bhatt, Bhuvana Narasimhan, Owen Rambow, Dipti Misra Sharma, and Fei Xia. 2009. Hindi syntax: Annotating dependency, lexical predicate-argument structure, and phrase structure.
In The 7th International Conference on Natural Language Processing, pages 14-17.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "PyTorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga,
Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "How multilingual is multilingual BERT?", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4996--5001", "other_ids": { "DOI": [ "10.18653/v1/P19-1493" ] }, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Prague dependency style treebank for Tamil", "authors": [ { "first": "Loganathan", "middle": [], "last": "Ramasamy", "suffix": "" }, { "first": "Zden\u011bk", "middle": [], "last": "\u017dabokrtsk\u00fd", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "1888--1894", "other_ids": {}, "num": null, "urls": [], "raw_text": "Loganathan Ramasamy and Zden\u011bk \u017dabokrtsk\u00fd. 2012. Prague dependency style treebank for Tamil. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 1888-1894, Istanbul, Turkey.
European Language Resources Association (ELRA).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A primer in BERTology: What we know about how BERT works", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "842--866", "other_ids": { "DOI": [ "10.1162/tacl_a_00349" ] }, "num": null, "urls": [], "raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842-866.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Is multilingual BERT fluent in language generation?", "authors": [ { "first": "Samuel", "middle": [], "last": "R\u00f6nnqvist", "suffix": "" }, { "first": "Jenna", "middle": [], "last": "Kanerva", "suffix": "" }, { "first": "Tapio", "middle": [], "last": "Salakoski", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing", "volume": "", "issue": "", "pages": "29--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R\u00f6nnqvist, Jenna Kanerva, Tapio Salakoski, and Filip Ginter. 2019. Is multilingual BERT fluent in language generation? In Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing, pages 29-36, Turku, Finland.
Link\u00f6ping University Electronic Press.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "What do you learn from context?
probing for sentence structure in contextualized word representations", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "The importance of being recurrent for modeling hierarchical structure", "authors": [ { "first": "Ke", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4731--4736", "other_ids": { "DOI": [ "10.18653/v1/D18-1503" ] }, "num": null, "urls": [], "raw_text": "Ke Tran, Arianna Bisazza, and Christof Monz. 2018.
The importance of being recurrent for modeling hierarchical structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4731-4736, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Morphology: Indian languages and European languages", "authors": [ { "first": "Shweta", "middle": [], "last": "Vikram", "suffix": "" } ], "year": 2013, "venue": "International Journal of Scientific and Research Publications", "volume": "3", "issue": "6", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shweta Vikram. 2013. Morphology: Indian languages and European languages. International Journal of Scientific and Research Publications, 3(6):1-5.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "SuperGLUE: A stickier benchmark for general-purpose language understanding systems", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems.
CoRR, abs/1905.00537.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "353--355", "other_ids": { "DOI": [ "10.18653/v1/W18-5446" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium.
Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.6" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam
Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "833--844", "other_ids": { "DOI": [ "10.18653/v1/D19-1077" ] }, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China.
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "(a,b) - Comparing the linear word-order in English and Hindi; (b,c,d,e) - Constructing Colorless Green sentences in Hindi.", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "Weighted F1 scores for the layer-wise probing experiments with the CG-HDTB test set for the mBERT, DistilmBERT, XLM-R, IndicBERT, and MuRIL models.", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "Example sentences from the CG-HDTB (left) and csCG-HDTB (right) treebank corresponding to an original sentence from the HDTB (top-left) treebank.", "num": null, "type_str": "figure", "uris": null }, "FIGREF3": { "text": "Example sentences from the CG-TTB (left) and csCG-TTB (right) treebank corresponding to an original sentence from the TTB (top-left) treebank.", "num": null, "type_str": "figure", "uris": null }, "FIGREF4": { "text": "Weighted F1 scores for the layer-wise probing experiments with the CG-TTB test set for the POS task. Weighted F1 scores for the layer-wise probing experiments with the CG-TTB test set for the STDP task.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "type_str": "table", "text": "", "html": null, "num": null, "content": "
( English Translation of the Original Sentence )
( Original Sentence )
( Template Sentence without the Content Words )
( Colorless Green Variant of the Original Sentence )
( English Translation of the Colorless Green Sentence )
Feminine Singular [ NNP ] Proper Noun Singular
Masculine Plural [ NNS ] Noun Plural
Masculine Singular [ VB ] Verb (base form)
" }, "TABREF3": { "type_str": "table", "text": "The number of tokens (for the POS and GCM tasks) and the number of sentences (for the STDP and SVA tasks) in the HDTB and CG-HDTB treebanks (the statistics are the same for the CG-HDTB and csCG-HDTB treebanks).", "html": null, "num": null, "content": "" }, "TABREF6": { "type_str": "table", "text": "The weighted-F1 scores for the last layer and the best layer (layer number in parentheses) for the layer-wise probing experiments with the Hindi language data. (Corresponding results for the Tamil language data can be found in Appendix B.1)", "html": null, "num": null, "content": "
" } } } }