{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:22:37.867905Z" }, "title": "SyntaxGym: An Online Platform for Targeted Evaluation of Language Models", "authors": [ { "first": "Jon", "middle": [], "last": "Gauthier", "suffix": "", "affiliation": { "laboratory": "", "institution": "Massachusetts Institute of Technology", "location": {} }, "email": "" }, { "first": "Jennifer", "middle": [], "last": "Hu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Massachusetts Institute of Technology", "location": {} }, "email": "" }, { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harvard University", "location": {} }, "email": "wilcoxeg@g.harvard.edu" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "", "affiliation": { "laboratory": "", "institution": "Massachusetts Institute of Technology", "location": {} }, "email": "pqian@mit.edu" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Massachusetts Institute of Technology", "location": {} }, "email": "rplevy@mit.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Targeted syntactic evaluations have yielded insights into the generalizations learned by neural network language models. However, this line of research requires an uncommon confluence of skills: both the theoretical knowledge needed to design controlled psycholinguistic experiments, and the technical proficiency needed to train and deploy large-scale language models. We present SyntaxGym, an online platform designed to make targeted evaluations accessible to both experts in NLP and linguistics, reproducible across computing environments, and standardized following the norms of psycholinguistic experimental design. This paper releases two tools of independent value for the computational linguistics community: Recent work in evaluating neural network language models focuses on investigating models' fine-grained prediction behavior on carefully designed examples. Unlike broad-coverage language modeling metrics such as perplexity, these evaluations are targeted to reveal whether models have learned specific knowledge about the syntactic structure of language (see e.g.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Targeted syntactic evaluations have yielded insights into the generalizations learned by neural network language models. However, this line of research requires an uncommon confluence of skills: both the theoretical knowledge needed to design controlled psycholinguistic experiments, and the technical proficiency needed to train and deploy large-scale language models. We present SyntaxGym, an online platform designed to make targeted evaluations accessible to both experts in NLP and linguistics, reproducible across computing environments, and standardized following the norms of psycholinguistic experimental design. This paper releases two tools of independent value for the computational linguistics community: Recent work in evaluating neural network language models focuses on investigating models' fine-grained prediction behavior on carefully designed examples. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recent work in evaluating neural network language models focuses on investigating models' fine-grained prediction behavior on carefully designed examples. Unlike broad-coverage language modeling metrics such as perplexity, these evaluations are targeted to reveal whether models have learned specific knowledge about the syntactic structure of language (see, e.g., the work surveyed in Section 2.2). This line of research requires an uncommon confluence of skills: a) the technical expertise necessary to train and deploy large-scale neural network language models, and b) the linguistic knowledge of language scientists necessary to design controlled, theoretically interesting psycholinguistic experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Figure 1: SyntaxGym allows linguists to easily design and run controlled experiments on the syntactic knowledge of language models, and allows NLP experts to test their own models against these standards. Users submit targeted syntactic evaluation experiments to the site, and they are automatically evaluated on language models available in the Gym. SyntaxGym analyzes and visualizes these evaluation results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In this paper, we introduce SyntaxGym: an online platform and open-source framework that makes targeted syntactic evaluations more accessible to experts in NLP and linguistics (Figure 1). The core of SyntaxGym is a website, syntaxgym.org, that automates the entire evaluation pipeline: collecting tests and models, running evaluations, and displaying results through interactive visualizations. Language scientists can use the site to design and submit targeted syntactic evaluations, testing whether language models have derived human-like syntactic knowledge. Independently, NLP experts can submit their own language models for evaluation on these assays. By separating the tasks performed by these two user groups, the SyntaxGym site lowers the barrier to entry for the broader community of language researchers.", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 186, "text": "(Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "", "sec_num": null }, { "text": "While SyntaxGym will serve as a centralized repository of syntactic evaluations and language models, we also release a set of command-line tools that allow users to reproduce the site's evaluations offline. The computation underlying the SyntaxGym site is structured around a command-line tool, syntaxgym, which allows any user to run targeted syntactic evaluations on their own computer. We accomplish this by developing a new standard API for interacting with state-of-the-art neural network language models, operationalized in a second tool, lm-zoo.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Taken together, these tools create a platform that makes the process of targeted syntactic evaluation more standardized, reproducible, and accessible to the broader communities of NLP experts and language scientists. Our goal is for SyntaxGym to facilitate the advancement of language model evaluation, leading to the development of models with more human-like linguistic knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Before presenting the SyntaxGym framework, we briefly introduce the targeted syntactic evaluation paradigm as a way to assess the quality of neural language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Standard left-to-right language models are trained to predict the next token given a context of previous tokens.
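As a minimal illustration of this scoring scheme (with invented next-token probabilities standing in for a real model's predictive distribution), the perplexity defined next is a simple transformation of the chained token probabilities:

import math

# Toy next-token probabilities p(w_i | w_1 .. w_{i-1}) for a 4-token text;
# a trained model would produce these from its predictive distribution.
p_next = [0.12, 0.05, 0.20, 0.01]

log_joint = sum(math.log(p) for p in p_next)  # log p(w_1, ..., w_N)
ppl = math.exp(-log_joint / len(p_next))      # = p(w_1, ..., w_N)^(-1/N)
print(round(ppl, 2))                          # 16.99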
Language models are typically assessed by their perplexity, the inverse geometric mean of the joint probability of words w_1, ..., w_N in a held-out test corpus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perplexity", "sec_num": "2.1" }, { "text": "PPL(C) = p(w_1, w_2, ..., w_N)^{-1/N} (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perplexity", "sec_num": "2.1" }, { "text": "However, a broad-coverage metric such as perplexity may not be ideal for assessing whether a language model has human-like syntactic knowledge. Recent empirical results suggest that models with similar perplexity measures can still exhibit substantial variance in syntactic knowledge (Hu et al., 2020; van Schijndel et al., 2019), according to evaluation paradigms described in the next section.", "cite_spans": [ { "start": 284, "end": 301, "text": "(Hu et al., 2020;", "ref_id": "BIBREF5" }, { "start": 302, "end": 329, "text": "van Schijndel et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Perplexity", "sec_num": "2.1" }, { "text": "Alternatively, a language model can be evaluated on its ability to make human-like generalizations for specific syntactic phenomena. The targeted syntactic evaluation paradigm (Linzen et al., 2016; Lau et al., 2017; Gulordava et al., 2018; Marvin and Linzen, 2018; Futrell et al., 2019; Warstadt et al., 2020) incorporates methods from psycholinguistic experiments, designing sentences which hold most lexical and syntactic features of each sentence constant while minimally varying features that determine grammaticality or surprisal characteristics of the sentence. For example, the following minimal-pair sentences differ in subject-verb agreement:", "cite_spans": [ { "start": 176, "end": 197, "text": "(Linzen et al., 2016;", "ref_id": "BIBREF9" }, { "start": 198, "end": 215, "text": "Lau et al., 2017;", "ref_id": null }, { "start": 216, "end": 239, "text": "Gulordava et al., 2018;", "ref_id": "BIBREF3" }, { "start": 240, "end": 264, "text": "Marvin and Linzen, 2018;", "ref_id": "BIBREF10" }, { "start": 265, "end": 286, "text": "Futrell et al., 2019;", "ref_id": "BIBREF2" }, { "start": 287, "end": 309, "text": "Warstadt et al., 2020)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Targeted tests for syntactic generalization", "sec_num": "2.2" }, { "text": "(1) The farmer near the clerks knows many people.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Targeted tests for syntactic generalization", "sec_num": "2.2" }, { "text": "(2) *The farmer near the clerks know many people.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Targeted tests for syntactic generalization", "sec_num": "2.2" }, { "text": "A model that has learned the proper subject-verb number agreement rules for English should assign a higher probability to the grammatical singular verb in the first sentence than to the ungrammatical plural verb in the second (Linzen et al., 2016).", "cite_spans": [ { "start": 226, "end": 247, "text": "(Linzen et al., 2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Targeted tests for syntactic generalization", "sec_num": "2.2" }, { "text": "The targeted syntactic evaluation paradigm allows us to focus on highly specific measures of language modeling performance, which more directly distinguish models with human-like representations of syntactic structure.
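As a concrete sketch of this comparison, the minimal pair above can be scored with the lm-zoo command-line tool introduced later in Section 4. The sketch assumes lm-zoo is installed with Docker running, uses the hosted gpt-2 model, and assumes the whitespace-separated output layout shown in Figure 5b; these are illustrative choices, not a fixed recipe:

import subprocess, tempfile

def total_surprisal(model, sentence):
    # Write the sentence to a file and sum the per-token surprisals
    # reported by lm-zoo (columns: sentence_id token_id token surprisal).
    with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
        print(sentence, file=f)
    out = subprocess.run(['lm-zoo', 'get-surprisals', model, f.name],
                         capture_output=True, text=True, check=True).stdout
    rows = [line.split() for line in out.strip().splitlines()[1:]]
    return sum(float(row[3]) for row in rows)

s_good = total_surprisal('gpt-2', 'The farmer near the clerks knows many people.')
s_bad = total_surprisal('gpt-2', 'The farmer near the clerks know many people.')
print('agreement prediction holds:', s_good < s_bad)  # lower surprisal = higher probability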
SyntaxGym was designed to serve as a central repository for these evaluations, and to make the evaluations reproducible and accessible for users without the necessary technical skills or computational resources. Section 3.1 first describes the standards we designed for specifying and executing these targeted syntactic evaluations. Section 3.2 then offers a tour of the SyntaxGym site, which is built around these standards.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SyntaxGym", "sec_num": "3" }, { "text": "We represent targeted syntactic evaluations as test suites, visualized in Figure 2. These test suites are the core component of psycholinguistic assessment, and should be familiar to those experienced in psycholinguistic experimental design. We will present the structure of a test suite using the running example of subject-verb agreement, introduced in the previous section. We describe the components of a test suite from the bottom up. Regions. The atomic unit of a test suite is a region: a (possibly empty) string, such as the matrix verb region in Figure 2. Regions can be concatenated to form full sentences.", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 82, "text": "Figure 2", "ref_id": null }, { "start": 551, "end": 559, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Standardizing targeted syntactic evaluation", "sec_num": "3.1" }, { "text": "Conditions. Regions vary systematically across experimental conditions, shown as colored pill shapes in Figure 2. Here the matrix verb and np subj regions vary between their respective singular and plural forms, as described by the condition.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Standardizing targeted syntactic evaluation", "sec_num": "3.1" }, { "text": "Items. Items are groups of related sentences which vary across experimental conditions. An item is characterized by its lexical content and takes different forms across conditions. For example, The farmer near the clerk knows and *The farmer near the clerk know are different sentences under two conditions of the same item.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standardizing targeted syntactic evaluation", "sec_num": "3.1" }, { "text": "Predictions. Test suites are designed with a hypothesis in mind: if a model has correctly learned some relevant syntactic generalization, then it should assign higher probability to grammatical continuations of sentences. Test suite predictions operationalize these hypotheses as expected inequalities between total model surprisal values in different experimental conditions (i.e., between rows within item blocks in Figure 2). The SyntaxGym standard allows for arbitrarily complex disjunctions and conjunctions of such inequalities. Figure 2 shows a prediction with two inequalities between model surprisals at matrix verb across two pairs of conditions.", "cite_spans": [], "ref_spans": [ { "start": 417, "end": 425, "text": "Figure 2", "ref_id": null }, { "start": 535, "end": 543, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Standardizing targeted syntactic evaluation", "sec_num": "3.1" }, { "text": "We designed a standard JSON schema for describing the structure and content of test suites using the above concepts. Interested readers can find the full schema and documentation on the SyntaxGym site. 1
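To make these concepts concrete, here is an illustrative sketch of a suite for the agreement example. The field names are simplified for exposition and are not the authoritative schema, which is documented at docs.syntaxgym.org:

# Simplified, illustrative test suite structure (not the exact JSON schema).
suite = {
    'meta': {'name': 'subject-verb-agreement'},
    'regions': ['np_subj', 'prep_phrase', 'matrix_verb', 'continuation'],
    'conditions': ['match', 'mismatch'],
    'items': [{
        'match':    ['The farmer', 'near the clerks', 'knows', 'many people.'],
        'mismatch': ['The farmer', 'near the clerks', 'know', 'many people.'],
    }],
    # Prediction: surprisal at matrix_verb should be lower in the grammatical
    # condition; the real standard also supports conjunctions and disjunctions.
    'predictions': ['matrix_verb[match] < matrix_verb[mismatch]'],
}
print(' '.join(suite['items'][0]['match']))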
1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standardizing targeted syntactic evaluation", "sec_num": "3.1" }, { "text": "Reproducing research results with modern neural network architectures can be notoriously difficult, due to variance in computing environments and due to each individual project's tangled web of package dependencies. In addition, inconsistencies in data preprocessing -for example, in tokenization practices and the management of out-of-vocabulary items -often make it difficult to evaluate even the same model on different datasets. In order to address these difficulties, we designed a standardized API for interacting with trained language models, built to solve these reproducibility issues and allow for highly portable computing with state-of-the-art language models. Users can easily connect with this API through the lm-zoo command-line tool, described later in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A standard API for language models", "sec_num": "3.1.1" }, { "text": "The standard is built around the Docker containerization system. We expect each language model to be wrapped in a Docker image, including a thin API exposing a set of standardized binary commands: tokenize, which preprocesses natural-language sentences exactly as a language model expects; get-surprisals, which computes per-token language model surprisals on natural language input; and unkify, which indicates exactly which tokens in an input text file are invocabulary for the language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A standard API for language models", "sec_num": "3.1.1" }, { "text": "Language model creators or third-party maintainers can produce such Docker images wrapping language model code. At present, this API is designed to mainly serve the needs of the SyntaxGym evaluation process. In the future, however, we plan to extend the API for other common uses of language models: for example, to extract the nextword predictive distributions from the model, and to extract the model's internal word and sentence representations. This standard is documented in full at cpllab.github.io/lm-zoo.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A standard API for language models", "sec_num": "3.1.1" }, { "text": "The SyntaxGym website provides a centralized domain for collecting targeted syntactic evaluations and evaluating them on state-of-the-art language models. It provides intuitive, user-friendly tools for visualizing the behavior of any language model on any syntactic test suite, and also exposes all of the resulting raw data to interested advanced users. This section presents a brief tour through the major features of the SyntaxGym site.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The SyntaxGym website", "sec_num": "3.2" }, { "text": "Create test suites Non-technical users can use SyntaxGym's browser-based interface to design and submit their own psycholinguistic test suites (Figure 3) . Separately, the site supports uploading pre-made test suites as a JSON-formatted file. This functionality may be useful for advanced users who prefer to automatically generate test suites. 2 Submit language models Users interested in evaluating their own language models first create a public Docker image conforming to the API specified by the SyntaxGym standard (Section 3.1.1). 
After users submit these language models on the SyntaxGym site, the models are automatically validated for conformity to the API by the SyntaxGym backend. Valid models are added to the SyntaxGym collection, and will be evaluated on all past and future available test suites in the Gym.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 153, "text": "(Figure 3)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The SyntaxGym website", "sec_num": "3.2" }, { "text": "Automatic evaluation. Whenever novel test suites or language models are submitted, SyntaxGym automatically evaluates the relevant suites and models in the cloud. For each test suite and model, the evaluation yields a prediction accuracy - the fraction of items for which the prediction holds. These prediction accuracies, along with the raw surprisal data, are stored in the SyntaxGym database and made available in visualizations such as Figure 4b.", "cite_spans": [], "ref_spans": [ { "start": 435, "end": 444, "text": "Figure 4b", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The SyntaxGym website", "sec_num": "3.2" }, { "text": "Visualization and data analysis. The site provides a variety of interactive charts that allow users to visualize results at different levels of granularity. On the coarsest level, users can compare aggregate performance across language models and groups of theoretically related test suites called tags (see Figure 1). Users can also compare accuracy across models on a single test suite (Figure 4a), across tags for a single model, and across test suites within a single tag. On the finest level, users can view raw region-by-region surprisal values to analyze in-depth performance of a particular language model on a particular test suite (Figure 4b).", "cite_spans": [], "ref_spans": [ { "start": 307, "end": 315, "text": "Figure 1", "ref_id": null }, { "start": 388, "end": 399, "text": "(Figure 4a)", "ref_id": "FIGREF1" }, { "start": 641, "end": 651, "text": "(Figure 4b", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The SyntaxGym website", "sec_num": "3.2" }, { "text": "We have seeded the SyntaxGym website with a collection of test suites and language models by aggregating prior research. These materials and relevant evaluation results are separately presented in Hu et al. (2020). Here we provide only a brief summary in order to illustrate the features of the SyntaxGym website.", "cite_spans": [ { "start": 197, "end": 213, "text": "Hu et al. (2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Seed data and results", "sec_num": "3.3" }, { "text": "1. We wrapped 8 modern neural network language models (summarized in Table 1) to be included in the Gym. Interested readers can find more details on these test suites and language models, along with the evaluation results and visualizations, on the SyntaxGym site.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seed data and results", "sec_num": "3.3" }, { "text": "While the SyntaxGym website allows for easy centralization of test suites and public access to results, all of its underlying infrastructure is also available independently for researchers to use. We release two command-line tools, available to any user with Python and Docker installed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Command-line tools", "sec_num": "4" }, { "text": "We first designed a general command-line tool for interacting with state-of-the-art neural language models, called lm-zoo.
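For a quick taste of the tool (assuming lm-zoo is installed and Docker is running; transformer-xl is one of the hosted models shown in Figure 5b, and the commands mirror the standardized API of Section 3.1.1):

import subprocess

with open('foo.txt', 'w') as f:
    print('This is a sentence.', file=f)

# Inspect the model's preprocessing: its tokenization of the input, and
# which tokens fall inside or outside its vocabulary.
for command in ['tokenize', 'unkify']:
    out = subprocess.run(['lm-zoo', command, 'transformer-xl', 'foo.txt'],
                         capture_output=True, text=True, check=True).stdout
    print(command, '->', out.strip())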
Figure 5b demonstrates how this tool can be used to easily extract prediction data from an arbitrary language model. Full documentation and installation instructions are available at cpllab.github.io/lm-zoo.", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 132, "text": "Figure 5b", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "lm-zoo: black-box access to SOTA language models", "sec_num": "4.1" }, { "text": "Users can completely reproduce the targeted syntactic evaluation paradigm of SyntaxGym outside of the website using a second command-line tool, syntaxgym, shown in Figure 5a. This tool does the work of converting test suites into actual natural-language sentences appropriately formatted for a particular language model, executing the model, and mapping the results back to a SyntaxGym-friendly format ready for analysis. It deals with the wide variation in tokenization and out-of-vocabulary token handling across models. Full documentation and installation instructions are available at syntaxgym.org/cli.", "cite_spans": [], "ref_spans": [ { "start": 164, "end": 173, "text": "Figure 5a", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "syntaxgym: targeted syntactic evaluation", "sec_num": "4.2" }, { "text": "Marvin and Linzen (2018) release a dataset of minimal-pair sentences designed to test language models' syntactic generalization capabilities. However, the syntactic coverage of the dataset is limited to a small set of phenomena: subject-verb agreement, reflexive anaphor licensing, and negative polarity items. Warstadt et al. (2020) release a large dataset aggregating a broad collection of targeted syntactic evaluations from prior research, known as BLiMP. Like the Marvin and Linzen dataset, BLiMP consists of a collection of minimal-pair sentences which contrast in grammaticality, following the standard shown in Examples (1) and (2). The BLiMP evaluation requires that language models assign a higher total probability to the grammatical (1) than to the ungrammatical (2). The authors design abstract templates which specify grammatical-ungrammatical pairs for many linguistic phenomena, and then generate example sentences based on these templates.", "cite_spans": [ { "start": 311, "end": 333, "text": "Warstadt et al. (2020)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "While BLiMP and SyntaxGym are similarly motivated, they differ slightly in methodology. First, BLiMP requires models to satisfy only a single inequality between sentence probabilities. While the SyntaxGym system can support such predictions, it is designed to support much stricter tests of language models, such as the conjunction of inequalities across multiple conditions (see Figure 2). Second, BLiMP compares model judgments about total sentence probabilities. In contrast, SyntaxGym is designed to compare model judgments only in critical test regions, which allows for a fairer evaluation of language model predictions within prespecified spans of interest. Finally, the BLiMP sentences are automatically generated from abstract grammars exemplifying syntactic phenomena of interest. Since automatic methods can easily yield a large number of sentences, they can help us control for other possible sources of noise in test materials. However, many grammatical phenomena of interest are fiendishly difficult to capture in abstract grammars, and require careful design by native speakers. 3
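To make the second contrast concrete, here is a toy example with invented per-region surprisals for a minimal pair:

# Invented per-region surprisals for a grammatical/ungrammatical pair.
good = {'np_subj': 2.0, 'matrix_verb': 3.0, 'continuation': 9.0}
bad = {'np_subj': 2.0, 'matrix_verb': 6.5, 'continuation': 5.0}

# BLiMP-style: compare whole-sentence surprisals (total probability).
blimp_passes = sum(good.values()) < sum(bad.values())        # 14.0 < 13.5 -> False
# SyntaxGym-style: compare surprisal in the critical region only.
syntaxgym_passes = good['matrix_verb'] < bad['matrix_verb']  # 3.0 < 6.5 -> True

print(blimp_passes, syntaxgym_passes)

Noise in the non-critical regions flips the whole-sentence comparison here, while the region-level comparison isolates the phenomenon of interest.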
The BLiMP data are thus complementary to the hand-designed test suites currently presented on the SyntaxGym site. We plan to adapt such large-scale test suites for SyntaxGym in the future.", "cite_spans": [ { "start": 1096, "end": 1097, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 380, "end": 388, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "This paper presented SyntaxGym, an online platform and open-source framework for targeted syntactic evaluation of neural network language models. SyntaxGym promises to advance the progress of language model evaluation by uniting the theoretical expertise of linguists with the technical skills of NLP researchers. The site is fully functional at syntaxgym.org, and the entire framework is available as open-source code.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "SyntaxGym is continually evolving: we plan to add new features to the site, and to develop further in response to user feedback. In particular, we plan to incorporate human performance as a reference metric, integrating psycholinguistic experimental results and supporting easy experimental design starting from the test suite format.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We also plan to incorporate further language models into the lm-zoo tool, allowing broader access to state-of-the-art language models in general. We welcome open-source contributions to the website and to the general framework, and especially encourage the NLP community to contribute their models to the lm-zoo repository.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "http://docs.syntaxgym.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In a future release, we will also allow users to import test suites from spreadsheets as CSV-formatted files.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For example, one such phenomenon is the garden-path disambiguation effect (Futrell et al., 2019), which is highly sensitive to nuanced lexical and world-knowledge features of sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgments J.G. is supported by an Open Philanthropy AI Fellowship. J.H. is supported by the NIH under award number T32NS105587 and an NSF Graduate Research Fellowship. R.L. is supported by a Google Faculty Research Award.
This work was also supported by the MIT-IBM Watson AI Lab.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "authors": [ { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2978--2988", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Recurrent neural network grammars", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "199--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. In Proceedings of NAACL-HLT, pages 199-209.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural language models as psycholinguistic subjects: Representations of syntactic state", "authors": [ { "first": "Richard", "middle": [], "last": "Futrell", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Morita", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "32--42", "other_ids": { "DOI": [ "10.18653/v1/N19-1004" ] }, "num": null, "urls": [], "raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 32-42, Minneapolis, Minnesota.
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Colorless green recurrent networks dream hierarchically", "authors": [ { "first": "Kristina", "middle": [], "last": "Gulordava", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "1195--1205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of NAACL-HLT, pages 1195-1205.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": { "DOI": [ "10.1162/neco.1997.9.8.1735" ] }, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A systematic assessment of syntactic generalization in neural language models", "authors": [ { "first": "Jennifer", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Gauthier", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In Proceedings of the Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Exploring the limits of language modeling", "authors": [ { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1602.02410" ] }, "num": null, "urls": [], "raw_text": "Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling.
arXiv preprint arXiv:1602.02410.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge", "authors": [ { "first": "Jey", "middle": [ "Han" ], "last": "Lau", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Shalom", "middle": [], "last": "Lappin", "suffix": "" } ], "year": 2017, "venue": "Cognitive Science", "volume": "41", "issue": "5", "pages": "1202--1247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science, 41(5):1202-1247.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "521--535", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, volume 4, pages 521-535.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Targeted syntactic evaluation of language models", "authors": [ { "first": "Rebecca", "middle": [], "last": "Marvin", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1192--1202", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
OpenAI Blog, 1(8):9.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Quantity doesn't buy quality syntax with neural language models", "authors": [ { "first": "Marten", "middle": [], "last": "van Schijndel", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5835--5841", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn't buy quality syntax with neural language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5835-5841.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Ordered neurons: Integrating tree structures into recurrent neural networks", "authors": [ { "first": "Yikang", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Shawn", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In International Conference on Learning Representations.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "BLiMP: A benchmark of linguistic minimal pairs for English", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Anhad", "middle": [], "last": "Mohananey", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Sheng-Fu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Society for Computation in Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2020. BLiMP: A benchmark of linguistic minimal pairs for English.
In Proceedings of the Society for Computation in Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "HuggingFace's Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Non-technical users can design their own test suites with a browser-based form.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "Screenshots of example visualizations from the SyntaxGym website.", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "(b) The lm-zoo tool provides lower-level access to SyntaxGym-hosted language models, allowing users to retrieve models' predictions, tokenization choices, and more.", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "We built SyntaxGym around command-line tools for probing and evaluating neural network language models, which can be used independently of the SyntaxGym site.", "uris": null }, "TABREF1": { "html": null, "content": "
", "type_str": "table", "text": "", "num": null }, "TABREF2": { "html": null, "content": "", "type_str": "table", "text": "Language models currently supported in the SyntaxGym framework.", "num": null }, "TABREF3": { "html": null, "content": "
# Evaluate model \"gpt-2\" on suite \"mvrr\" $ syntaxgym evaluate gpt-2 mvrr Accuracy: 0.7857 (22/28 correct) # Evaluate arbitrary model on custom suite $ syntaxgym evaluate \\ > docker://me/my-model my-suite.json Accuracy: 0.575 (23/40 correct) (a) $ $ lm-zoo get-surprisals transformer-xl foo.txt sentence_id token_id token surprisal 1 1 This 0.0000 1 1 is 4.1239 1 1 a 1.0126 ...
", "type_str": "table", "text": "$ syntaxgym list models gpt-2, gpt-2-xl, transformer-xl, ... $ syntaxgym list suites number-orc, number-src, mvrr, ... The syntaxgym tool allows users to evaluate language models on test suites -both models and suites hosted by SyntaxGym, and models and suites created by the user. echo \"This is a sentence.\" > foo.txt $ lm-zoo list models gpt-2, gpt-2-xl, transformer-xl, ... $ lm-zoo tokenize transformer-xl foo.txt This is a sentence .", "num": null } } } }