{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:46:13.583834Z" }, "title": "jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models", "authors": [ { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "" }, { "first": "Phil", "middle": [], "last": "Yeres", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "" }, { "first": "Phu", "middle": [ "Mon" ], "last": "Htut", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": {} }, "email": "bowman@nyu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce jiant, an open source toolkit for conducting multitask and transfer learning experiments on English NLU tasks. jiant enables modular and configuration-driven experimentation with state-of-the-art models and implements a broad set of tasks for probing, transfer learning, and multitask training experiments. jiant implements over 50 NLU tasks, including all GLUE and SuperGLUE benchmark tasks. We demonstrate that jiant reproduces published performance on a variety of tasks and models, including BERT and RoBERTa. jiant is available at https:// jiant.info. * Equal contribution. 1 The name jiant stands for \"jiant is an NLP toolkit\".", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We introduce jiant, an open source toolkit for conducting multitask and transfer learning experiments on English NLU tasks. jiant enables modular and configuration-driven experimentation with state-of-the-art models and implements a broad set of tasks for probing, transfer learning, and multitask training experiments. jiant implements over 50 NLU tasks, including all GLUE and SuperGLUE benchmark tasks. We demonstrate that jiant reproduces published performance on a variety of tasks and models, including BERT and RoBERTa. jiant is available at https:// jiant.info. * Equal contribution. 1 The name jiant stands for \"jiant is an NLP toolkit\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper introduces jiant, 1 an open source toolkit that allows researchers to quickly experiment on a wide array of NLU tasks, using state-of-the-art NLP models, and conduct experiments on probing, transfer learning, and multitask training. 
jiant supports many state-of-the-art Transformer-based models implemented by Huggingface's Transformers package, as well as non-Transformer models such as BiLSTMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Packages and libraries like HuggingFace's Transformers (Wolf et al., 2019) and AllenNLP (Gardner et al., 2017) have accelerated the process of experimenting and iterating on NLP models by both abstracting out implementation details, and simplifying the model training pipeline. jiant extends the capabilities of both toolkits by presenting a wrapper that implements a variety of complex experimental pipelines in a scalable and easily controllable setting. jiant contains a task bank of over 50 tasks, including all the tasks presented in GLUE (Wang et al., 2018) , SuperGLUE , the edge-probing suite (Tenney et al., 2019b) , and the SentEval probing suite (Conneau and Kiela, 2018) , as well as other individual tasks including CCG supertagging (Hockenmaier and Steedman, 2007) , SocialIQA (Sap et al., 2019) , and CommonsenseQA (Talmor et al., 2019) . jiant is also the official baseline codebase for the Super-GLUE benchmark.", "cite_spans": [ { "start": 55, "end": 74, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF38" }, { "start": 88, "end": 110, "text": "(Gardner et al., 2017)", "ref_id": "BIBREF6" }, { "start": 544, "end": 563, "text": "(Wang et al., 2018)", "ref_id": "BIBREF35" }, { "start": 601, "end": 623, "text": "(Tenney et al., 2019b)", "ref_id": "BIBREF32" }, { "start": 657, "end": 682, "text": "(Conneau and Kiela, 2018)", "ref_id": "BIBREF3" }, { "start": 746, "end": 778, "text": "(Hockenmaier and Steedman, 2007)", "ref_id": "BIBREF9" }, { "start": 791, "end": 809, "text": "(Sap et al., 2019)", "ref_id": "BIBREF27" }, { "start": 830, "end": 851, "text": "(Talmor et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "jiant's core design principles are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Ease of use: jiant should allow users to run a variety of experiments using state-of-the-art models via an easy to use configuration-driven interface.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Reproducibility: jiant should provide features that support correct and reproducible experiments, including logging and saving and restoring model state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Availability of NLU tasks: jiant should maintain and continue to grow a collection of tasks useful for NLU research, especially popular evaluation tasks and tasks commonly used in pretraining and transfer learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Availability of cutting-edge models: jiant should make implementations of state-of-theart models available for experimentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Open source: jiant should be free to use, and easy to contribute to.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Early versions of jiant have already been used in multiple works, including probing analyses (Tenney et al., 2019b,a; 
Warstadt et al., 2019; Hewitt and Manning, 2019; Jawahar et al., 2019) , transfer learning experiments (Wang et al., 2019a; Phang et al., 2018) , and dataset and benchmark construction (Wang et al., , 2018 . ", "cite_spans": [ { "start": 93, "end": 117, "text": "(Tenney et al., 2019b,a;", "ref_id": null }, { "start": 118, "end": 140, "text": "Warstadt et al., 2019;", "ref_id": "BIBREF36" }, { "start": 141, "end": 166, "text": "Hewitt and Manning, 2019;", "ref_id": "BIBREF7" }, { "start": 167, "end": 188, "text": "Jawahar et al., 2019)", "ref_id": "BIBREF11" }, { "start": 221, "end": 241, "text": "(Wang et al., 2019a;", "ref_id": null }, { "start": 242, "end": 261, "text": "Phang et al., 2018)", "ref_id": "BIBREF22" }, { "start": 303, "end": 323, "text": "(Wang et al., , 2018", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Transfer learning is an area of research that uses knowledge from pretrained models to transfer to new tasks. In recent years, Transformer-based models like BERT (Devlin et al., 2019) and T5 (Raffel et al., 2019) have yielded state-of-the-art results on the lion's share of benchmark tasks for language understanding through pretraining and transfer, often paired with some form of multitask learning.", "cite_spans": [ { "start": 162, "end": 183, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 191, "end": 212, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "jiant enables a variety of complex training pipelines through simple configuration changes, including multi-task training (Caruana, 1993; Liu et al., 2019a) and pretraining, as well as the sequential fine-tuning approach from STILTs (Phang et al., 2018) . In STILTs, intermediate task training takes a pretrained model like ELMo or BERT, and applies supplementary training on a set of intermediate tasks, before finally performing single-task training on additional downstream tasks.", "cite_spans": [ { "start": 122, "end": 137, "text": "(Caruana, 1993;", "ref_id": "BIBREF1" }, { "start": 138, "end": 156, "text": "Liu et al., 2019a)", "ref_id": "BIBREF16" }, { "start": 233, "end": 253, "text": "(Phang et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "3 jiant System Overview 3.1 Requirements and Deployment jiant can be cloned and installed from GitHub: https://github.com/nyu-mll/jiant. jiant v1.3.0 requires Python 3.5 or later, and jiant's core dependencies are PyTorch (Paszke et al., 2019) , AllenNLP (Gardner et al., 2017) , and HuggingFace's Transformers (Wolf et al., 2019) . jiant is released under the MIT License (Open Source Initiative, 2020). jiant runs on consumergrade hardware or in cluster environments with or without CUDA GPUs. 
The jiant repository also contains documentation and configuration files demonstrating how to deploy jiant in Kubernetes clusters on Google Kubernetes Engine.", "cite_spans": [ { "start": 222, "end": 243, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF19" }, { "start": 255, "end": 277, "text": "(Gardner et al., 2017)", "ref_id": "BIBREF6" }, { "start": 311, "end": 330, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "\u2022 Tasks: Tasks have references to task data, methods for processing data, references to classifier heads, and methods for calculating performance metrics, and making predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "jiant Components", "sec_num": "3.2" }, { "text": "\u2022 Sentence Encoder: Sentence encoders map from the indexed examples to a sentence-level representation. Sentence encoders can include an input module (e.g., Transformer models, ELMo, or word embeddings), followed by an optional second layer of encoding (usually a BiLSTM). Examples of possible sentence encoder configurations include BERT, ELMo followed by a BiLSTM, BERT with a variety of pooling and aggregation methods, or a bag of words model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "jiant Components", "sec_num": "3.2" }, { "text": "\u2022 Task-Specific Output Heads: Task-specific output modules map representations from sentence encoders to outputs specific to a task, e.g. entailment/neutral/contradiction for NLI tasks, or tags for part-of-speech tagging. They also include logic for computing the corresponding loss for training (e.g. cross-entropy).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "jiant Components", "sec_num": "3.2" }, { "text": "\u2022 Trainer: Trainers manage the control flow for the training and validation loop for experiments. They sample batches from one or more tasks, perform forward and backward passes, calculate training metrics, evaluate on a validation set, and save checkpoints. Users can specify experiment-specific parameters such as learning rate, batch size, and more.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "jiant Components", "sec_num": "3.2" }, { "text": "\u2022 Config: Config files or flags are defined in HOCON 2 format. Configs specify parameters for jiant experiments including choices of tasks, sentence encoder, and training routine. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "jiant Components", "sec_num": "3.2" }, { "text": "Configs are jiant's primary user interface. Tasks and modeling components are designed to be modular, while jiant's pipeline is a monolithic, configuration-driven design intended to facilitate a number of common workflows outlined in 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "jiant Components", "sec_num": "3.2" }, { "text": "jiant's core pipeline consists of the five stages described below and illustrated in 2. The tasks and sentence encoder are prepared: (a) The task data is loaded, tokenized, and indexed, and the preprocessed task objects are serialized and cached. In this process, AllenNLP is used to create the vocabulary and index the tokenized data. (b) The sentence encoder is constructed and (optionally) pretrained weights are loaded. 4 (c) The task-specific output heads are created for each task, and task heads are attached to a common sentence encoder. 
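As a rough illustration of this structure (a conceptual sketch in plain PyTorch, not jiant's actual classes or API), a shared sentence encoder with one output head per task can be expressed as follows; the class and argument names here are purely illustrative:

```python
# Conceptual sketch: a common sentence encoder with per-task output heads,
# mirroring step 1(c) above. Not jiant's real implementation.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int, task_num_labels: dict):
        super().__init__()
        self.encoder = encoder  # shared sentence encoder (e.g., a Transformer)
        # one task-specific output head per task, keyed by task name
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden_dim, n) for task, n in task_num_labels.items()}
        )

    def forward(self, batch: torch.Tensor, task: str) -> torch.Tensor:
        representation = self.encoder(batch)     # sentence-level representation
        return self.heads[task](representation)  # logits for the requested task
```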
Optionally, different tasks can share the same output head, as in Liu et al. (2019a) .", "cite_spans": [ { "start": 612, "end": 630, "text": "Liu et al. (2019a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "jiant Pipeline Overview", "sec_num": "3.3" }, { "text": "3. Optionally, in the intermediate phase the trainer samples batches randomly from one or more tasks, 5 and trains the shared model. 4. Optionally, in the target training phase, a copy of the model is configured and trained or finetuned for each target task separately.", "cite_spans": [ { "start": 102, "end": 103, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "jiant Pipeline Overview", "sec_num": "3.3" }, { "text": "5. Optionally, the model is evaluated on the validation and/or test sets of the target tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "jiant Pipeline Overview", "sec_num": "3.3" }, { "text": "jiant supports over 50 tasks. Task types include classification, regression, sequence generation, tagging, masked language modeling, and span prediction. jiant focuses on NLU tasks like MNLI (Williams et al., 2018) , CommonsenseQA (Talmor et al., 2019), the Winograd Schema Challenge (Levesque et al., 2012) , and SQuAD (Rajpurkar et al., 2016) . A full inventory of tasks and task variants is available in the jiant/tasks module. jiant provides support for cutting-edge sentence encoder models, including support for Huggingface's Transformers. Supported models include: ELMo (Peters et al., 2018) , GPT (Radford, 2018) , BERT (Devlin et al., 2019) , XLM (Conneau and Lample, 2019), GPT-2 (Radford et al., 2019) , XLNet , RoBERTa (Liu et al., 2019b) , and ALBERT (Lan et al., 2019) . jiant also supports the from-scratch training of (bidirectional) LSTMs (Hochreiter and Schmidhuber, 1997) and deep bag of words models (Iyyer et al., 2015) , as well as syntax-aware models such as PRPN (Shen et al., 2018) and ON-LSTM (Shen et al., 2019) . 
jiant also supports word embeddings such as GloVe (Pennington et al., 2014) .", "cite_spans": [ { "start": 191, "end": 214, "text": "(Williams et al., 2018)", "ref_id": "BIBREF37" }, { "start": 284, "end": 307, "text": "(Levesque et al., 2012)", "ref_id": "BIBREF14" }, { "start": 320, "end": 344, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF26" }, { "start": 577, "end": 598, "text": "(Peters et al., 2018)", "ref_id": "BIBREF21" }, { "start": 605, "end": 620, "text": "(Radford, 2018)", "ref_id": "BIBREF23" }, { "start": 628, "end": 649, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 690, "end": 712, "text": "(Radford et al., 2019)", "ref_id": "BIBREF24" }, { "start": 731, "end": 750, "text": "(Liu et al., 2019b)", "ref_id": null }, { "start": 764, "end": 782, "text": "(Lan et al., 2019)", "ref_id": "BIBREF13" }, { "start": 856, "end": 890, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF8" }, { "start": 920, "end": 940, "text": "(Iyyer et al., 2015)", "ref_id": "BIBREF10" }, { "start": 987, "end": 1006, "text": "(Shen et al., 2018)", "ref_id": "BIBREF28" }, { "start": 1019, "end": 1038, "text": "(Shen et al., 2019)", "ref_id": "BIBREF29" }, { "start": 1091, "end": 1116, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Task and Model resources in jiant", "sec_num": "3.4" }, { "text": "jiant experiments can be run with a simple CLI:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Interface", "sec_num": "3.5" }, { "text": "python -m jiant \\ --config_file roberta_with_mnli.conf \\ --overrides \"target_tasks = swag, \\ run_name = swag_01\" jiant provides default config files that allow running many experiments without modifying source code.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Interface", "sec_num": "3.5" }, { "text": "jiant also provides baseline config files that can serve as a starting point for model development and evaluation against GLUE (Wang et al., 2018) and SuperGLUE benchmarks.", "cite_spans": [ { "start": 127, "end": 146, "text": "(Wang et al., 2018)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "User Interface", "sec_num": "3.5" }, { "text": "More advanced configurations can be developed by composing multiple configurations files and overrides. Figure 3 shows a config file that overrides a default config, defining an experiment that uses BERT as the sentence encoder. This config includes an example of a task-specific configuration, which can be overridden in another config file or via a command line override.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 112, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "User Interface", "sec_num": "3.5" }, { "text": "Because jiant implements the option to provide command line overrides with a flag, it is easy to write scripts that launch jiant experiments over a range of parameters, for example while performing grid search across hyperparameters. 
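For instance, a small launcher script along the following lines can sweep a hyperparameter grid by composing override strings and invoking the CLI shown above; the override keys used here (lr, batch_size) and the config file name are assumptions for illustration and may not match a given jiant version's config schema:

```python
# Sketch of a grid-search launcher built on jiant's --config_file/--overrides flags.
import itertools
import subprocess

learning_rates = [1e-5, 2e-5, 3e-5]
batch_sizes = [16, 32]

for i, (lr, bs) in enumerate(itertools.product(learning_rates, batch_sizes)):
    overrides = f"lr = {lr}, batch_size = {bs}, run_name = grid_{i:02d}"
    subprocess.run(
        [
            "python", "-m", "jiant",
            "--config_file", "roberta_with_mnli.conf",
            "--overrides", overrides,
        ],
        check=True,  # stop the sweep if a run fails
    )
```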
jiant users have successfully run large-scale experiments launching hundreds of runs on both Kubernetes and Slurm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Interface", "sec_num": "3.5" }, { "text": "Here we highlight some example use cases and key corresponding jiant config options required in these experiments:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example jiant Use Cases and Options", "sec_num": "3.6" }, { "text": "\u2022 Fine-tune BERT on SWAG (Zellers et al., 2018) and SQUAD (Rajpurkar et al., 2016) , then fine-tune on HellaSwag (Zellers et al., 2019) : ", "cite_spans": [ { "start": 25, "end": 47, "text": "(Zellers et al., 2018)", "ref_id": "BIBREF40" }, { "start": 58, "end": 82, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF26" }, { "start": 113, "end": 135, "text": "(Zellers et al., 2019)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Example jiant Use Cases and Options", "sec_num": "3.6" }, { "text": "input_module =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example jiant Use Cases and Options", "sec_num": "3.6" }, { "text": "jiant implements features that improve run stability and efficiency:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimizations and Other Features", "sec_num": "3.7" }, { "text": "\u2022 jiant implements checkpointing options designed to offer efficient early stopping and to show consistent behavior when restarting after an interruption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimizations and Other Features", "sec_num": "3.7" }, { "text": "\u2022 jiant caches preprocessed task data to speed up reuse across experiments which share common data resources and artifacts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimizations and Other Features", "sec_num": "3.7" }, { "text": "\u2022 jiant implements gradient accumulation and multi-GPU, which enables training on larger batches than can fit in memory for a single GPU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimizations and Other Features", "sec_num": "3.7" }, { "text": "\u2022 jiant supports outputting predictions in a format ready for GLUE and SuperGLUE benchmark submission.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimizations and Other Features", "sec_num": "3.7" }, { "text": "\u2022 jiant generates custom log files that capture experimental configurations, training and evaluation metrics, and relevant run-time information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimizations and Other Features", "sec_num": "3.7" }, { "text": "\u2022 jiant generates TensorBoard event files (Abadi et al., 2015) for training and evaluation metric tracking. TensorBoard event files can be visualized using the TensorBoard Scalars Dashboard.", "cite_spans": [ { "start": 42, "end": 62, "text": "(Abadi et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Optimizations and Other Features", "sec_num": "3.7" }, { "text": "jiant's design offers conveniences that reduce the need to modify code when making changes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensibility", "sec_num": "3.8" }, { "text": "\u2022 jiant's task registry makes it easy to define a new version of an existing task using different data. 
Once the new task is defined in the task registry, the task is available as an option in jiant's config.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensibility", "sec_num": "3.8" }, { "text": "\u2022 jiant's sentence encoder and task output head abstractions allow for easy support of new sentence encoders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensibility", "sec_num": "3.8" }, { "text": "In use cases requiring the introduction of a new task, users can use class inheritance to build on a number of available parent task types including classification, tagging, span prediction, span classification, sequence generation, regression, ranking, and multiple choice task classes. For these task types, corresponding task-specific output heads are already implemented.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensibility", "sec_num": "3.8" }, { "text": "More than 30 researchers and developers from more than 5 institutions have contributed code to the jiant project. 6 jiant's maintainers welcome pull requests that introduce new tasks or sentence encoder components, and pull request are actively reviewed. The jiant repository's continuous integration system requires that all pull requests pass unit and integration tests and meet Black 7 code formatting requirements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensibility", "sec_num": "3.8" }, { "text": "While jiant is quite flexible in the pipelines that can be specified through configs, and some components are highly modular (e.g., tasks, sentence encoders, and output heads), modification of the pipeline code can be difficult. For example, training in more than two phases would require modifying the trainer code. 8 Making multi-stage training configurations more flexible is on jiant's development roadmap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Development Roadmap", "sec_num": "3.9" }, { "text": "jiant's development roadmap prioritizes adding support for new Transformer models, and adding tasks that are commonly used for pretraining and evaluation in NLU. Additionally, there are plans to make jiant's training phase configuration options more flexible to allow training in more than two phases, and to continue to refactor jiant's code to keep jiant flexible to track developments in NLU research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Development Roadmap", "sec_num": "3.9" }, { "text": "To benchmark jiant, we perform a set of experiments that reproduce external results for single fine-tuning and transfer learning experiments. jiant has been benchmarked extensively in both published and ongoing work on a majority of the implemented tasks. We benchmark single-task fine-tuning configurations using CommonsenseQA (Talmor et al., 2019) and SocialIQA (Sap et al., 2019) . On Common-senseQA with RoBERTa LARGE , jiant achieves an accuracy of 72.2, comparable to 72.1 reported by Liu et al. (2019b) . On SocialIQA with BERTlarge, jiant achieves a dev set accuracy of 65.8, comparable to 66.0 reported in Sap et al. (2019) .", "cite_spans": [ { "start": 328, "end": 349, "text": "(Talmor et al., 2019)", "ref_id": "BIBREF30" }, { "start": 364, "end": 382, "text": "(Sap et al., 2019)", "ref_id": "BIBREF27" }, { "start": 491, "end": 509, "text": "Liu et al. (2019b)", "ref_id": null }, { "start": 615, "end": 632, "text": "Sap et al. 
(2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Benchmark Experiments", "sec_num": "4" }, { "text": "Next, we benchmark jiant's transfer learning regime. We perform transfer experiments from MNLI to BoolQ with BERT-large. In this configuration Clark et al. (2019) demonstrated an accuracy improvement of 78.1 to 82.2 on the dev set, and jiant achieves an improvement of 78.1 to 80.3.", "cite_spans": [ { "start": 143, "end": 162, "text": "Clark et al. (2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Benchmark Experiments", "sec_num": "4" }, { "text": "jiant provides a configuration-driven interface for defining transfer learning and representation learning experiments using a bank of over 50 NLU tasks, cutting-edge sentence encoder models, and multi-task and multi-stage training procedures. Further, jiant is shown to be able to replicate published performance on various NLU tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "jiant's modular design of task and sentence encoder components make it possible for users to quickly and easily experiment with a large number of tasks, models, and parameter configurations, without editing source code. jiant's design also makes it easy to add new tasks, and jiant's architecture makes it convenient to extend jiant to support new sentence encoders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "jiant code is open source, and jiant invites contributors to open issues or submit pull request to the jiant project repository: https://github. com/nyu-mll/jiant. Anhad Mohananey, Katharina Kann, Shikha Bordia, Nicolas Patry, David Benton, and Ellie Pavlick have contributed substantial engineering assistance to the project.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The early development of jiant took at the 2018 Frederick Jelinek Memorial Summer Workshop on Speech and Language Technologies, and was supported by Johns Hopkins University with unrestricted gifts from Amazon, Facebook, Google, Microsoft and Mitsubishi Electric Research Laboratories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Subsequent development was possible in part by a donation to NYU from Eric and Wendy Schmidt made by recommendation of the Schmidt Futures program, by support from Intuit Inc., and by support from Samsung Research under the project Improving Deep Learning using Latent Structure. We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan V GPU used at NYU in this work. Alex Wang's work on the project is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1342536. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Yada Pruksachatkun's work on the project is supported in part by the Moore-Sloan Data Science Environment as part of the NYU Data Science Services initiative. Sam Bowman's work on jiant during Summer 2019 took place in his capacity as a visiting researcher at Google.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Human-Optimized Config Object Notation (lightbend, 2011). 
jiant uses HOCON's logic to consolidate multiple config files and command-line overrides into a single run config.3 jiant configs support multi-phase training routines as described in section 3.3 and illustrated inFigure 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The sentence encoder's weights can optionally be left frozen, or be included in the training procedure.5 Tasks can be sampled using a variety of sample weighting methods, e.g., uniform or proportional to the tasks' number of training batches or examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/nyu-mll/jiant/ graphs/contributors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/psf/black 8 While not supported by config options, training in more than two phases is possible by using jiant's checkpointing features to reload models for additional rounds of training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Katherin Yu, Jan Hula, Patrick Xia, Raghu Pappagari, Shuning Jin, R. Thomas McCoy, Roma Patel, Yinghui Huang, Edouard Grave, Najoung Kim, Thibault F\u00e9vry, Berlin Chen, Nikita Nangia, ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Citro", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Devin", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Ghemawat", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Harp", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Irving", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Yangqing", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Manjunath", "middle": [], "last": "Kudlur ; Martin Wicke", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaoqiang", "middle": [], "last": "Zheng", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. 
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefow- icz, Lukasz Kaiser, Manjunath Kudlur, Josh Leven- berg, Dandelion Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Mar- tin Wattenberg, Martin Wicke, Yuan Yu, and Xiao- qiang Zheng. 2015. TensorFlow: Large-scale ma- chine learning on heterogeneous systems. Software available from tensorflow.org.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multitask learning: A knowledgebased source of inductive bias", "authors": [ { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 1993, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich Caruana. 1993. Multitask learning: A knowledge- based source of inductive bias. In ICML.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BoolQ: Exploring the surprising difficulty of natural yes/no questions", "authors": [ { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2924--2936", "other_ids": { "DOI": [ "10.18653/v1/N19-1300" ] }, "num": null, "urls": [], "raw_text": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924-2936, Min- neapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "SentEval: An evaluation toolkit for universal sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence repre- sentations. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC-2018), Miyazaki, Japan. 
European Languages Resources Association (ELRA).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "7057--7067", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems 32, pages 7057-7067.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "AllenNLP: A deep semantic natural language processing platform", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Grus", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "Luke", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform. 
Unpublished manuscript avail- able on arXiv.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": { "DOI": [ "10.18653/v1/N19-1419" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn treebank", "authors": [ { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "3", "pages": "355--396", "other_ids": { "DOI": [ "10.1162/coli.2007.33.3.355" ] }, "num": null, "urls": [], "raw_text": "Julia Hockenmaier and Mark Steedman. 2007. CCG- bank: A corpus of CCG derivations and dependency structures extracted from the Penn treebank. Com- putational Linguistics, 33(3):355-396.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Deep unordered composition rivals syntactic methods for text classification", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Manjunatha", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1681--1691", "other_ids": { "DOI": [ "10.3115/v1/P15-1162" ] }, "num": null, "urls": [], "raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered compo- sition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1681-1691, Beijing, China. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "What does BERT learn about the structure of language", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 3651-3657, Florence, Italy. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Probing what different NLP tasks teach machines about function word comprehension", "authors": [ { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)", "volume": "", "issue": "", "pages": "235--249", "other_ids": { "DOI": [ "10.18653/v1/S19-1026" ] }, "num": null, "urls": [], "raw_text": "Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bow- man, and Ellie Pavlick. 2019. Probing what dif- ferent NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Se- mantics (*SEM 2019), pages 235-249, Minneapolis, Minnesota. Association for Computational Linguis- tics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. 
ALBERT: A lite BERT for self-supervised learning of language representations.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Winograd schema challenge", "authors": [ { "first": "Hector", "middle": [ "J" ], "last": "Levesque", "suffix": "" }, { "first": "Ernest", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Leora", "middle": [], "last": "Morgenstern", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, KR'12", "volume": "", "issue": "", "pages": "552--561", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hector J. Levesque, Ernest Davis, and Leora Mor- genstern. 2012. The Winograd schema challenge. In Proceedings of the Thirteenth International Con- ference on Principles of Knowledge Representa- tion and Reasoning, KR'12, pages 552-561. AAAI Press. lightbend. 2011. HOCON (human-optimized con- fig object notation).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Open sesame: Getting inside BERT's linguistic knowledge", "authors": [ { "first": "Yongjie", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chern Tan", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "241--253", "other_ids": { "DOI": [ "10.18653/v1/W19-4825" ] }, "num": null, "urls": [], "raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multi-task deep neural networks for natural language understanding", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4487--4496", "other_ids": { "DOI": [ "10.18653/v1/P19-1441" ] }, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Flo- rence, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Open Source Initiative. 2020. The MIT License", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Open Source Initiative. 2020. 
The MIT License.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Py-Torch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- Torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d' Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1532-1543, Doha, Qatar. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks. Unpublished manuscript available on arXiv", "authors": [ { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "F\u00e9vry", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Phang, Thibault F\u00e9vry, and Samuel R. Bowman. 2018. Sentence Encoders on STILTs: Supplemen- tary Training on Intermediate Labeled-data Tasks. Unpublished manuscript available on arXiv.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford. 2018. Improving language understand- ing by generative pre-training.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. 
Language models are unsupervised multitask learners.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. Unpublished manuscript available on arXiv.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": { "DOI": [ "10.18653/v1/D16-1264" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Social IQa: Commonsense reasoning about social interactions", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4463--4473", "other_ids": { "DOI": [ "10.18653/v1/D19-1454" ] }, "num": null, "urls": [], "raw_text": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Com- monsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4463- 4473, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Neural language modeling by jointly learning syntax and lexicon", "authors": [ { "first": "Yikang", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Zhouhan", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Chin-Wei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Aaron", "middle": [ "C" ], "last": "Courville", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron C. Courville. 2018. Neural language modeling by jointly learning syntax and lexicon. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Ordered neurons: Integrating tree structures into recurrent neural networks", "authors": [ { "first": "Yikang", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Shawn", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Aaron", "middle": [ "C" ], "last": "Courville", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron C. Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge", "authors": [ { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Lourie", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4149--4158", "other_ids": { "DOI": [ "10.18653/v1/N19-1421" ] }, "num": null, "urls": [], "raw_text": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "What do you learn from context? probing for sentence structure in contextualized word representations", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "R", "middle": [ "Thomas" ], "last": "Mccoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Can you tell me how to get past sesame street? 
sentence-level pretraining beyond language modeling", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hula", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Raghavendra", "middle": [], "last": "Pappagari", "suffix": "" }, { "first": "R", "middle": [ "Thomas" ], "last": "Mccoy", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Yinghui", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Katherin", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Shuning", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4465--4476", "other_ids": { "DOI": [ "10.18653/v1/P19-1439" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bowman. 2019a. Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4465-4476, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "SuperGLUE: A stickier benchmark for general-purpose language understanding systems", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "3261--3275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019b. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. 
In Advances in Neural Information Processing Systems 32, pages 3261-3275.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "353--355", "other_ids": { "DOI": [ "10.18653/v1/W18-5446" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Investigating BERT's knowledge of language: Five analysis methods with NPIs", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Ioana", "middle": [], "last": "Grosu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Hagen", "middle": [], "last": "Blix", "suffix": "" }, { "first": "Yining", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Alsop", "suffix": "" }, { "first": "Shikha", "middle": [], "last": "Bordia", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "" }, { "first": "Sheng-Fu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Anhad", "middle": [], "last": "Mohananey", "suffix": "" }, { "first": "Phu Mon", "middle": [], "last": "Htut", "suffix": "" }, { "first": "Paloma", "middle": [], "last": "Jeretic", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2870--2880", "other_ids": { "DOI": [ "10.18653/v1/D19-1286" ] }, "num": null, "urls": [], "raw_text": "Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investigating BERT's knowledge of language: Five analysis methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2870-2880, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. Unpublished manuscript available on arXiv.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "XLNet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Russ", "middle": [ "R" ], "last": "Salakhutdinov", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "5754--5764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. 
In Advances in Neural Information Processing Systems 32, pages 5754-5764.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "SWAG: A large-scale adversarial dataset for grounded commonsense inference", "authors": [ { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "93--104", "other_ids": { "DOI": [ "10.18653/v1/D18-1009" ] }, "num": null, "urls": [], "raw_text": "Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93-104, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "HellaSwag: Can a machine really finish your sentence?", "authors": [ { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4791--4800", "other_ids": { "DOI": [ "10.18653/v1/P19-1472" ] }, "num": null, "urls": [], "raw_text": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800, Florence, Italy. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Multi-phase jiant experiment configuration used by Wang et al. (2019a): a BERT sentence encoder is trained with an intermediate task model during jiant's intermediate training phase, and fine-tuned with various target task models in jiant's target training phase.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Figure 2:", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "jiant pipeline stages using RoBERTa as the sentence encoder, ReCoRD and MNLI tasks as intermediate tasks, and MNLI and BoolQ as tasks for target training and evaluation. The diagram highlights that during target training and evaluation phases, copies are made of the sentence encoder model, and fine-tuning and evaluation for each task are conducted on separate copies. 1. A config or multiple configs defining an experiment are interpreted. Users can choose and configure models, tasks, and stages of training and evaluation.", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "Example jiant experiment config file.", "type_str": "figure", "uris": null }, "FIGREF4": { "num": null, "text": "bert-base-cased pretrain_tasks = \"swag,squad\" target_tasks = hellaswag \u2022 Train a probing classifier over a frozen BERT model, as in Tenney et al. 
(2019a): input_module = bert-base-cased target_tasks = edges-dpr transfer_paradigm = frozen \u2022 Compare performance of GloVe (Pennington et al., 2014) embeddings using a BiLSTM: input_module = glove sent_enc = rnn \u2022 Evaluate ALBERT (Lan et al., 2019) on the MNLI (Williams et al., 2018) task: input_module = albert-large-v2 target_task = mnli", "type_str": "figure", "uris": null } } } }
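The inline config fragments quoted in the figure captions above can also be read as complete experiment files. The sketch below simply lays two of the quoted fragments out in file form with explanatory comments: the key names and values (input_module, pretrain_tasks, target_tasks, transfer_paradigm) are taken verbatim from the captions, while the file layout and the HOCON-style // comments are assumptions, and any parameter not shown is assumed to fall back to the toolkit's default config.

    // Illustrative sketch, not a verified release file.
    // Experiment 1 from the captions: intermediate training on SWAG and SQuAD,
    // then target training and evaluation on HellaSwag.
    input_module = bert-base-cased
    pretrain_tasks = "swag,squad"
    target_tasks = hellaswag

    // Experiment 2 from the captions (probing variant): keep the BERT encoder
    // frozen and train only the edge-probing target task model.
    input_module = bert-base-cased
    target_tasks = edges-dpr
    transfer_paradigm = frozen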