{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:37:30.749938Z"
},
"title": "The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": ""
},
{
"first": "Jianshu",
"middle": [],
"last": "Ji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": "jianshuj@microsoft.com"
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": ""
},
{
"first": "Xueyun",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": "xuzhu@microsoft.com"
},
{
"first": "Emmanuel",
"middle": [],
"last": "Awa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": ""
},
{
"first": "Guihong",
"middle": [],
"last": "Cao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present MT-DNN 1 , an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models. Built upon PyTorch and Transformers, MT-DNN is designed to facilitate rapid customization for a broad spectrum of NLU tasks, using a variety of objectives (classification, regression, structured prediction) and text encoders (e.g., RNNs, BERT, RoBERTa, UniLM). A unique feature of MT-DNN is its built-in support for robust and transferable learning using the adversarial multi-task learning paradigm. To enable efficient production deployment, MT-DNN supports multitask knowledge distillation, which can substantially compress a deep neural model without significant performance drop. We demonstrate the effectiveness of MT-DNN on a wide range of NLU applications across general and biomedical domains. The software and pretrained models will be publicly available at https://github.com/namisan/mt-dnn. * Equal Contribution. 1 The complete name of our toolkit is M T 2-DNN (The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding), but we use MT-DNN for sake of simplicity.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present MT-DNN 1 , an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models. Built upon PyTorch and Transformers, MT-DNN is designed to facilitate rapid customization for a broad spectrum of NLU tasks, using a variety of objectives (classification, regression, structured prediction) and text encoders (e.g., RNNs, BERT, RoBERTa, UniLM). A unique feature of MT-DNN is its built-in support for robust and transferable learning using the adversarial multi-task learning paradigm. To enable efficient production deployment, MT-DNN supports multitask knowledge distillation, which can substantially compress a deep neural model without significant performance drop. We demonstrate the effectiveness of MT-DNN on a wide range of NLU applications across general and biomedical domains. The software and pretrained models will be publicly available at https://github.com/namisan/mt-dnn. * Equal Contribution. 1 The complete name of our toolkit is M T 2-DNN (The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding), but we use MT-DNN for sake of simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "NLP model development has observed a paradigm shift in recent years, due to the success in using pretrained language models to improve a wide range of NLP tasks Devlin et al., 2019) . Unlike the traditional pipeline approach that conducts annotation in stages using primarily supervised learning, the new paradigm features a universal pretraining stage that trains a large neural language model via self-supervision on a large unlabeled text corpus, followed by a fine-tuning step that starts from the pretrained contextual representations and conducts supervised learning for individual tasks. The pretrained language models can effectively model textual variations and distributional similarity. Therefore, they can make subsequent task-specific training more sample efficient and often significantly boost performance in downstream tasks. However, these models are quite large and pose significant challenges to production deployment that has stringent memory or speed requirements. As a result, knowledge distillation has become another key feature in this new learning paradigm. An effective distillation step can often substantially compress a large model for efficient deployment (Clark et al., 2019; Tang et al., 2019; Liu et al., 2019a) .",
"cite_spans": [
{
"start": 161,
"end": 181,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 1187,
"end": 1207,
"text": "(Clark et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 1208,
"end": 1226,
"text": "Tang et al., 2019;",
"ref_id": "BIBREF43"
},
{
"start": 1227,
"end": 1245,
"text": "Liu et al., 2019a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the NLP community, there are several well designed frameworks for research and commercial purposes, including toolkits for providing conventional layered linguistic annotations (Manning et al., 2014) , platforms for developing novel neural models and systems for neural machine translation (Ott et al., 2019) . However, it is hard to find an existing tool that supports all features in the new paradigm and can be easily customized for new tasks. For example, provides a number of popular Transformerbased (Vaswani et al., 2017) text encoders in a nice unified interface, but does not offer multitask learning or adversarial training, state-of-the-art techniques that have been shown to significantly improve performance. Additionally, most public frameworks do not offer knowledge distillation. A notable exception is DistillBERT , but it provides a standalone compressed model and does not support task-specific model compression that can further improve performance.",
"cite_spans": [
{
"start": 180,
"end": 202,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 293,
"end": 311,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 509,
"end": 531,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce MT-DNN, a comprehensive and easily-configurable open-source toolkit for building robust and transferable natural language understanding models. MT-DNN is built upon PyTorch (Paszke et al., 2019) and the popular Transformer-based text-encoder interface . It supports a large inventory of pretrained models, neural architectures, and NLU tasks, and can be easily customized for new tasks.",
"cite_spans": [
{
"start": 186,
"end": 207,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A key distinct feature for MT-DNN is that it provides out-of-box adversarial training, multi-task learning, and knowledge distillation. Users can train a set of related tasks jointly to amplify each other. They can also invoke adversarial training (Miyato et al., 2018; Jiang et al., 2019; Liu et al., 2020) , which helps improve model robustness and generalizability. For production deployment where large model size becomes a practical obstacle, users can use MT-DNN to compress the original models into substantially smaller ones, even using a completely different architecture (e.g., compressed BERT or other Transformer-based text encoders into LSTMs (Hochreiter and Schmidhuber, 1997) ). The distillation step can similarly leverage multitask learning and adversarial training. Users can also conduct pretraining from scratch using the masked language model objective in MT-DNN. Moreover, in the fine-tuning step, users can incorporate this as an auxiliary task on the training text, which has been shown to improve performance. MT-DNN provides a comprehensive list of stateof-the-art pre-trained NLU models, together with step-by-step tutorials for using such models in general and biomedical applications.",
"cite_spans": [
{
"start": 248,
"end": 269,
"text": "(Miyato et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 270,
"end": 289,
"text": "Jiang et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 290,
"end": 307,
"text": "Liu et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 656,
"end": 690,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "MT-DNN is designed for modularity, flexibility, and ease of use. These modules are built upon Py-Torch (Paszke et al., 2019) and Transformers , allowing the use of the SOTA pretrained models, e.g., BERT (Devlin et al., 2019) , RoBERTa (Liu et al., 2019c) and UniLM (Dong et al., 2019) . The unique attribute of this package is a flexible interface for adversarial multi-task fine-tuning and knowledge distillation, so that researchers and developers can build large SOTA NLU models and then compress them to small ones for online deployment.The overall workflow and system architecture are shown in Figure 1 ",
"cite_spans": [
{
"start": 103,
"end": 124,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 203,
"end": 224,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 235,
"end": 254,
"text": "(Liu et al., 2019c)",
"ref_id": "BIBREF27"
},
{
"start": 265,
"end": 284,
"text": "(Dong et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 599,
"end": 607,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Design",
"sec_num": "2"
},
{
"text": "As shown in Figure 1 , starting from the neural language model pre-training, there are three different training configurations by following the directed arrows:",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Workflow",
"sec_num": "2.1"
},
{
"text": "\u2022 Single-task configuration: single-task fine-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Workflow",
"sec_num": "2.1"
},
{
"text": "Adversarial Training Figure 1 : The workflow of MT-DNN: train a neural language model on a large amount of unlabeled raw text to obtain general contextual representations; then finetune the learned contextual representation on downstream tasks, e.g. GLUE (Wang et al., 2018) ; lastly, distill this large model to a lighter one for online deployment. In the later two phrases, we can leverage powerful multi-task learning and adversarial training to further improve performance. tuning and single-task knowledge distillation;",
"cite_spans": [
{
"start": 255,
"end": 274,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single-task Knowledge Distillation",
"sec_num": null
},
{
"text": "\u2022 Multi-task configuration: multi-task finetuning and multi-task knowledge distillation;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single-task Knowledge Distillation",
"sec_num": null
},
{
"text": "\u2022 Multi-stage configuration: multi-task finetuning, single-task fine tuning and single-task knowledge distillation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single-task Knowledge Distillation",
"sec_num": null
},
{
"text": "Moreover, all configurations can be additionally equipped with the adversarial training. Each stage of the workflow is described in details as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single-task Knowledge Distillation",
"sec_num": null
},
{
"text": "Neural Language Model Pre-Training Due to the great success of deep contextual representations, such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018) and BERT (Devlin et al., 2019) , it is common practice of developing NLU models by first pre-training the underlying neural text representations (text encoders) through massive language modeling which results in superior text representations transferable across multiple NLP tasks. Because of this, there has been an increasing effort to develop better pre-trained text encoders by multiplying either the scale of data (Liu et al., 2019c) or the size of model (Raffel et al., 2019) . Similar to existing codebases (Devlin et al., 2019), MT-DNN supports the LM pretraining from scratch with multiple types of objectives, such as masked LM (Devlin et al., 2019) and Figure 2: Process of knowledge distillation for MTL. A set of tasks where there is task-specific labeled training data are picked. Then, for each task, an ensemble of different neural nets (teacher) is trained. The teacher is used to generate for each task-specific training sample a set of soft targets. Given the soft targets of the training datasets across multiple tasks, a single MT-DNN (student) shown in Figure 3 is trained using multi-task learning and back propagation, except that if task t has a teacher, the task-specific loss is the average of two objective functions, one for the correct targets and the other for the soft targets assigned by the teacher.",
"cite_spans": [
{
"start": 136,
"end": 158,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 168,
"end": 189,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 578,
"end": 597,
"text": "(Liu et al., 2019c)",
"ref_id": "BIBREF27"
},
{
"start": 619,
"end": 640,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF37"
},
{
"start": 797,
"end": 818,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 1234,
"end": 1242,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single-task Knowledge Distillation",
"sec_num": null
},
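The averaged hard/soft-target objective described in the Figure 2 caption can be made concrete with a short sketch. This is a minimal illustration for a classification task: the equal weighting follows the caption's "average of two objective functions", while the temperature knob and tensor shapes are assumptions for the example, not the toolkit's exact implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=1.0):
    """Average of the hard-target loss and the soft-target loss, as in Figure 2.

    student_logits, teacher_logits: (batch, n_class); labels: (batch,) class ids.
    The temperature is an illustrative knob, not a documented MT-DNN default.
    """
    # Loss against the correct (hard) targets.
    hard_loss = F.cross_entropy(student_logits, labels)
    # Loss against the teacher's soft targets (cross entropy with soft labels).
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = -(soft_targets * log_probs).sum(dim=-1).mean()
    # The task-specific loss is the average of the two objectives.
    return 0.5 * (hard_loss + soft_loss)

# Toy usage with random tensors.
logits_s = torch.randn(4, 3)
logits_t = torch.randn(4, 3)
y = torch.tensor([0, 2, 1, 1])
print(distillation_loss(logits_s, logits_t, y))
```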
{
"text": "next sentence prediction (Devlin et al., 2019) . Moreover, users can leverage the LM pretraining, such as masked LM used by BERT, as an auxiliary task for fine-tuning under the multitask learning (MTL) framework Liu et al., 2019b) . Fine-tuning Once the text encoder is trained in the pre-training stage, an additional task-specific layer is usually added for fine-tuning based on the downstream task. Besides the existing typical single-task fine-tuning, MT-DNN facilitates a joint fine-tuning with a configurable list of related tasks in a MTL fashion. By encoding task-relatedness and sharing underlying text representations, MTL is a powerful training paradigm that promotes the model generalization ability and results in improved performance (Caruana, 1997; Liu et al., 2019b; Luong et al., 2015; Liu et al., 2015; Ruder, 2017; Collobert et al., 2011) . Additionally, a two-step fine-tuning stage is also supported to utilize datasets from related tasks, i.e. a single-task fine-tuning following a multi-task fine-tuning. It also supports two popular sampling strategies in MTL training: 1) sampling tasks uniformly (Caruana, 1997; Liu et al., 2015) ; 2) sampling tasks based on the size of the dataset (Liu et al., 2019b) . This makes it easy to explore various ways to feed training data to MTL training. Finally, to further improve the model robustness, MT-DNN also offers a recipe to apply adversarial training (Madry et al., 2017; Zhu et al., 2019; Jiang et al., 2019) in the fine-tuning stage. Knowledge Distillation Although contextual text representation models pre-trained with massive text data have led to remarkable progress in NLP, it is computationally prohibitive and inefficient to deploy such models with millions of parameters for real-world applications (e.g. BERT large model has 344 million parameters). Therefore, in order to expedite the NLU model learned in either a single-task or multi-task fashion for deployment, MT-DNN additionally supports the multitask knowledge distillation (Clark et al., 2019; Liu et al., 2019a; Tang et al., 2019; Balan et al., 2015; Ba and Caruana, 2014) , an extension of (Hinton et al., 2015) , to compress cumbersome models into lighter ones. The multi-task knowledge distillation process is illustrated in Figure 2 . Similar to the fine-tuning stage, adversarial training is available in the knowledge distillation stage.",
"cite_spans": [
{
"start": 25,
"end": 46,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 212,
"end": 230,
"text": "Liu et al., 2019b)",
"ref_id": "BIBREF25"
},
{
"start": 748,
"end": 763,
"text": "(Caruana, 1997;",
"ref_id": "BIBREF5"
},
{
"start": 764,
"end": 782,
"text": "Liu et al., 2019b;",
"ref_id": "BIBREF25"
},
{
"start": 783,
"end": 802,
"text": "Luong et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 803,
"end": 820,
"text": "Liu et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 821,
"end": 833,
"text": "Ruder, 2017;",
"ref_id": "BIBREF39"
},
{
"start": 834,
"end": 857,
"text": "Collobert et al., 2011)",
"ref_id": "BIBREF8"
},
{
"start": 1122,
"end": 1137,
"text": "(Caruana, 1997;",
"ref_id": "BIBREF5"
},
{
"start": 1138,
"end": 1155,
"text": "Liu et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 1209,
"end": 1228,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF25"
},
{
"start": 1421,
"end": 1441,
"text": "(Madry et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 1442,
"end": 1459,
"text": "Zhu et al., 2019;",
"ref_id": "BIBREF50"
},
{
"start": 1460,
"end": 1479,
"text": "Jiang et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 2013,
"end": 2033,
"text": "(Clark et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 2034,
"end": 2052,
"text": "Liu et al., 2019a;",
"ref_id": "BIBREF24"
},
{
"start": 2053,
"end": 2071,
"text": "Tang et al., 2019;",
"ref_id": "BIBREF43"
},
{
"start": 2072,
"end": 2091,
"text": "Balan et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 2092,
"end": 2113,
"text": "Ba and Caruana, 2014)",
"ref_id": "BIBREF0"
},
{
"start": 2132,
"end": 2153,
"text": "(Hinton et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 2269,
"end": 2277,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single-task Knowledge Distillation",
"sec_num": null
},
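As an illustration of the two MTL sampling strategies mentioned above, the sketch below builds a per-step task schedule either uniformly or in proportion to dataset size. The helper function and the toy dataset sizes are assumptions for the example; they are not MT-DNN's batcher API.

```python
import random

def make_task_schedule(dataset_sizes, steps, strategy="proportional", seed=0):
    """Return a list of task names, one per training step.

    dataset_sizes: dict mapping task name -> number of mini-batches.
    strategy: "uniform" samples tasks uniformly (Caruana, 1997; Liu et al., 2015);
              "proportional" samples in proportion to dataset size (Liu et al., 2019b).
    """
    rng = random.Random(seed)
    tasks = list(dataset_sizes)
    if strategy == "uniform":
        weights = [1.0] * len(tasks)
    else:
        weights = [float(dataset_sizes[t]) for t in tasks]
    return [rng.choices(tasks, weights=weights, k=1)[0] for _ in range(steps)]

# Toy usage: with proportional sampling, most steps draw a batch from the largest task.
schedule = make_task_schedule({"mnli": 25000, "rte": 160, "sts-b": 360}, steps=10)
print(schedule)
```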
{
"text": "Lexicon Encoder (l 1 ): The input X = {x 1 , ..., x m } is a sequence of tokens of length m. The first token x 1 is always a specific token, e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "2.2"
},
{
"text": "[CLS] for BERT Devlin et al. (2019) while <s> for RoBERTa Liu et al. (2019c) . If X is a pair of sentences (X 1 , X 2 ), we separate these sentences with special tokens, e.g.",
"cite_spans": [
{
"start": 15,
"end": 35,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 58,
"end": 76,
"text": "Liu et al. (2019c)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "2.2"
},
{
"text": "[SEP] for BERT and [</s>] for RoBERTa. The lexicon encoder maps X into a sequence of input embedding vectors, Figure 3 : Overall System Architecture: The lower layers are shared across all tasks while the top layers are taskspecific. The input X (either a sentence or a set of sentences) is first represented as a sequence of embedding vectors, one for each word, in l 1 . Then the encoder, e.g a Transformer or recurrent neural network (LSTM) model, captures the contextual information for each word and generates the shared contextual embedding vectors in l 2 . Finally, for each task, additional task-specific layers generate task-specific representations, followed by operations necessary for classification, similarity scoring, or relevance ranking. In case of adversarial training, we perturb embeddings from the lexicon encoder and then add an extra loss term during the training. Note that for the inference phrase, it does not require perturbations.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Architecture",
"sec_num": "2.2"
},
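The Figure 3 caption notes that adversarial training perturbs the lexicon-encoder embeddings and adds an extra loss term during training only. The sketch below shows one generic way to do this in PyTorch, in the spirit of the cited adversarial-training work: take a gradient-guided step in embedding space and penalize the change in predictions. The step size, the KL-based divergence, and the toy model are illustrative assumptions rather than MT-DNN's exact recipe.

```python
import torch
import torch.nn.functional as F

def adversarial_loss(model, embeddings, logits, epsilon=1e-3):
    """Extra training-time loss from a small perturbation of the input embeddings.

    model: callable mapping embeddings -> logits (the rest of the network).
    embeddings: (batch, seq_len, hidden) output of the lexicon encoder.
    logits: clean-pass logits for the same batch.
    """
    # Start from small random noise and find the direction that most changes predictions.
    noise = torch.randn_like(embeddings) * epsilon
    noise.requires_grad_()
    adv_logits = model(embeddings + noise)
    kl = F.kl_div(F.log_softmax(adv_logits, dim=-1),
                  F.softmax(logits.detach(), dim=-1), reduction="batchmean")
    grad, = torch.autograd.grad(kl, noise)
    # Take a normalized step along that direction and re-measure the divergence.
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    adv_logits = model(embeddings + delta.detach())
    return F.kl_div(F.log_softmax(adv_logits, dim=-1),
                    F.softmax(logits, dim=-1), reduction="batchmean")

# Toy usage: a linear head over mean-pooled embeddings stands in for the shared encoder.
torch.manual_seed(0)
head = torch.nn.Linear(8, 3)
model = lambda e: head(e.mean(dim=1))
emb = torch.randn(2, 5, 8)
print(adversarial_loss(model, emb, model(emb)))
```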
{
"text": "one for each token, constructed by summing the corresponding word with positional, and optional segment embeddings. Encoder (l 2 ): We support a multi-layer bidirectional Transformer (Vaswani et al., 2017) or a LSTM (Hochreiter and Schmidhuber, 1997) encoder to map the input representation vectors (l 1 ) into a sequence of contextual embedding vectors C \u2208 R d\u00d7m . This is the shared representation across different tasks. Note that MT-DNN also allows developers to customize their own encoders. For example, one can design an encoder with few Transformer layers (e.g. 3 layers) to distill knowledge from the BERT large model (24 layers), so that they can deploy this small mode online to meet the latency restriction as shown in Figure 2 .",
"cite_spans": [
{
"start": 183,
"end": 205,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF45"
},
{
"start": 216,
"end": 250,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 731,
"end": 739,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Architecture",
"sec_num": "2.2"
},
{
"text": "Task-Specific Output Layers: We can incorporate arbitrary natural language tasks, each with its task-specific output layer. For example, we implement the output layers as a neural decoder for a neural ranker for relevance ranking, a logistic regression for text classification, and so on. A multistep reasoning decoder, SAN (Liu et al., 2018a,b) is also provided. Customers can choose from existing task-specific output layer or implement new one by themselves.",
"cite_spans": [
{
"start": 324,
"end": 345,
"text": "(Liu et al., 2018a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "2.2"
},
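A minimal PyTorch sketch of the shared-encoder/task-specific-head layout described above (Figure 3): a lexicon encoder, a shared encoder producing contextual embeddings, and one output layer per task. The tiny LSTM stands in for the configurable encoders (BERT, RoBERTa, etc.), and all sizes and task names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared lower layers (lexicon encoder + encoder) with task-specific top layers."""

    def __init__(self, vocab_size, d, task_n_classes):
        super().__init__()
        # Lexicon encoder (l_1): token embeddings (positional/segment terms omitted here).
        self.embed = nn.Embedding(vocab_size, d)
        # Shared encoder (l_2): a small LSTM stands in for a Transformer in this sketch.
        self.encoder = nn.LSTM(d, d, batch_first=True)
        # Task-specific output layers, e.g. a linear classifier per task.
        self.heads = nn.ModuleDict({t: nn.Linear(d, n) for t, n in task_n_classes.items()})

    def forward(self, token_ids, task):
        x = self.embed(token_ids)        # (batch, m, d)
        c, _ = self.encoder(x)           # shared contextual embeddings C
        pooled = c[:, 0]                 # first-token representation, e.g. [CLS]
        return self.heads[task](pooled)  # task-specific logits

# Toy usage with two hypothetical tasks.
model = MultiTaskModel(vocab_size=100, d=32, task_n_classes={"snli": 3, "cola": 2})
tokens = torch.randint(0, 100, (4, 7))
print(model(tokens, task="snli").shape)  # torch.Size([4, 3])
```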
{
"text": "In this section, we present a comprehensive set of examples to illustrate how to customize MT-DNN for new tasks. We use popular benchmarks from general and biomedical domains, including GLUE (Wang et al., 2018) , SNLI (Bowman et al., 2015) , SciTail (Khot et al., 2018) , SQuAD (Rajpurkar et al., 2016), ANLI (Nie et al., 2019) , and biomedical named entity recognition (NER), relation extraction (RE) and question answering (QA) . To make the experiments reproducible, we make all the configuration files publicly available. We also provide a quick guide for customizing a new task in Jupyter notebooks.",
"cite_spans": [
{
"start": 191,
"end": 210,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 213,
"end": 239,
"text": "SNLI (Bowman et al., 2015)",
"ref_id": null
},
{
"start": 250,
"end": 269,
"text": "(Khot et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 309,
"end": 327,
"text": "(Nie et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Application",
"sec_num": "3"
},
{
"text": "Understanding Benchmarks",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Domain Natural Language",
"sec_num": "3.1"
},
{
"text": "\u2022 GLUE. The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding (NLU) tasks. As shown in Table 1 , it includes question answering (Rajpurkar et al., 2016) , linguistic acceptability (Warstadt et al., 2018) , sentiment analy- ",
"cite_spans": [
{
"start": 192,
"end": 216,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 244,
"end": 267,
"text": "(Warstadt et al., 2018)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "General Domain Natural Language",
"sec_num": "3.1"
},
{
"text": "Dev Test BERT LARGE (Nie et al., 2019) 49.3 44.2 RoBERTa LARGE (Nie et al., 2019) 53.7 49.7 RoBERTa-LARGE + AdvTrain 57.1 57.1 sis (Socher et al., 2013 ), text similarity (Cer et al., 2017) , paraphrase detection (Dolan and Brockett, 2005) , and natural language inference (NLI) Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009; Levesque et al., 2012; Williams et al., 2018) . The diversity of the tasks makes GLUE very suitable for evaluating the generalization and robustness of NLU models.",
"cite_spans": [
{
"start": 20,
"end": 38,
"text": "(Nie et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 63,
"end": 81,
"text": "(Nie et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 131,
"end": 151,
"text": "(Socher et al., 2013",
"ref_id": "BIBREF41"
},
{
"start": 171,
"end": 189,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 213,
"end": 239,
"text": "(Dolan and Brockett, 2005)",
"ref_id": "BIBREF11"
},
{
"start": 279,
"end": 301,
"text": "Bar-Haim et al., 2006;",
"ref_id": "BIBREF2"
},
{
"start": 302,
"end": 327,
"text": "Giampiccolo et al., 2007;",
"ref_id": "BIBREF14"
},
{
"start": 328,
"end": 352,
"text": "Bentivogli et al., 2009;",
"ref_id": "BIBREF3"
},
{
"start": 353,
"end": 375,
"text": "Levesque et al., 2012;",
"ref_id": "BIBREF20"
},
{
"start": 376,
"end": 398,
"text": "Williams et al., 2018)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "\u2022 SNLI. The Stanford Natural Language Inference (SNLI) dataset contains 570k human annotated sentence pairs, in which the premises are drawn from the captions of the Flickr30 corpus and hypothe-ses are manually annotated (Bowman et al., 2015) . This is the most widely used entailment dataset for NLI.",
"cite_spans": [
{
"start": 221,
"end": 242,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "\u2022 SciTail This is a textual entailment dataset derived from a science question answering (SciQ) dataset (Khot et al., 2018) . In contrast to other entailment datasets mentioned previously, the hypotheses in SciTail are created from science questions while the corresponding answer candidates and premises come from relevant web sentences retrieved from a large corpus.",
"cite_spans": [
{
"start": 104,
"end": 123,
"text": "(Khot et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "\u2022 ANLI. The Adversarial Natural Language Inference (ANLI, Nie et al. (2019) ) is a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. Particular, the data is selected to be difficult to the state-of-the-art models, including BERT and RoBERTa.",
"cite_spans": [
{
"start": 58,
"end": 75,
"text": "Nie et al. (2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "\u2022 SQuAD. The Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) contains about 23K passages and 100K questions. The passages come from approximately 500 Wikipedia articles and the questions and answers are obtained by crowdsourcing. Following (Devlin et al., 2019) , table 2 compares different training algorithm: 1) BERT denotes a single task fine-tuning; 2) BERT + MTL indicates that it is trained jointly via MTL; at last 3), BERT + Ad-vTrain represents that a single task fine-tuning with adversarial training. It is obvious that the both MLT and adversarial training helps to obtain a better result. We further test our model on an adversarial natural language inference (ANLI) dataset (Nie et al., 2019) . Table 3 summarizes the results on ANLI. As Nie et al. (2019) , all the dataset of ANLI (Nie et al., 2019) , MNLI (Williams et al., 2018) , SNLI (Bowman et al., 2015) and FEVER (Thorne et al., 2018) are combined as training. RoBERTa-LARGE+AdvTrain obtains the best performance compared with all the strong baselines, demonstrating the advantage of adversarial training.",
"cite_spans": [
{
"start": 57,
"end": 81,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 261,
"end": 282,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 709,
"end": 727,
"text": "(Nie et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 773,
"end": 790,
"text": "Nie et al. (2019)",
"ref_id": "BIBREF32"
},
{
"start": 817,
"end": 835,
"text": "(Nie et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 843,
"end": 866,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF48"
},
{
"start": 869,
"end": 895,
"text": "SNLI (Bowman et al., 2015)",
"ref_id": null
},
{
"start": 906,
"end": 927,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 730,
"end": 737,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Understating Benchmarks There has been rising interest in exploring natural language understanding tasks in high-value domains other than newswire and the Web. In our release, we provide MT-DNN customization for three representative biomedical natural language understanding tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biomedical Natural Language",
"sec_num": "3.2"
},
{
"text": "\u2022 Named entity recognition (NER): In biomedical natural language understanding, NER has received greater attention than other tasks and datasets are available for recognizing various biomedical entities such as disease, gene, drug (chemical).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biomedical Natural Language",
"sec_num": "3.2"
},
{
"text": "\u2022 Relation extraction (RE): Relation extraction is more closely related to end applications, but annotation effort is significantly higher compared to NER. Most existing RE tasks focus on binary relations within a short text span such as a sentence of an abstract. Examples include gene-disease or protein-chemical relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biomedical Natural Language",
"sec_num": "3.2"
},
{
"text": "\u2022 Question answering (QA): Inspired by interest in QA for the general domain, there has been some effort to create question-answering datasets in biomedicine. Annotation requires domain expertise, so it is significantly harder than in general domain, where it is to produce large-scale datasets by crowdsourcing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biomedical Natural Language",
"sec_num": "3.2"
},
{
"text": "The MT-DNN customization can work with standard or biomedicine-specific pretraining models such as BioBERT, and can be directly applied to biomedical benchmarks . We will go though a typical Natural Language Inference task, e.g. SNLI, which is one of the most popular benchmark, showing how to apply our toolkit to a new task. MT-DNN is driven by configuration and command line arguments. Firstly, the SNLI configuration is shown in Figure 4 . The configuration defines tasks, model architecture as well as loss functions. We briefly introduce these attributes as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 433,
"end": 441,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Biomedical Natural Language",
"sec_num": "3.2"
},
{
"text": "1. data format is a required attribute and it denotes that each sample includes two sentences (premise and hypothesis). Please refer the tutorial and API for supported formats.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension",
"sec_num": "3.3"
},
{
"text": "2. task layer type specifies architecture of the task specific layer. The default is a \"linear layer\". 3. labels Users can list unique values of labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension",
"sec_num": "3.3"
},
{
"text": "The configuration helps to convert back and forth between text labels and numbers during training and evaluation. Without it, MT-DNN assumes the label of prediction are numbers. 4. metric meta is the evaluation metric used for validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension",
"sec_num": "3.3"
},
{
"text": "5. loss is the loss function for SNLI. It also supports other functions, e.g. MSE for regression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension",
"sec_num": "3.3"
},
{
"text": "6. kd loss is the loss function in the knowledge distillation setting. 7. adv loss is the loss function in the adversarial setting. 8. n class denotes the number of categories for SNLI. 9. task type specifies whether it is a classification task or a regression task. Once the configuration is provided, one can train the customized model for the task, using any supported pre-trained models as starting point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension",
"sec_num": "3.3"
},
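To make attributes 1-9 concrete, here is a hedged sketch of an SNLI-style task configuration written as a Python dict. The key spellings and the specific loss/metric identifiers are assumptions for illustration and should be checked against the released configuration files and tutorial.

```python
# Hypothetical SNLI task configuration mirroring attributes 1-9 above;
# key names and values are illustrative, not guaranteed to match MT-DNN's files.
snli_task_config = {
    "data_format": "PremiseAndOneHypothesis",               # 1. each sample has two sentences
    "task_layer_type": "LinearLayer",                        # 2. architecture of the task-specific layer
    "labels": ["contradiction", "neutral", "entailment"],    # 3. unique label values
    "metric_meta": ["ACC"],                                  # 4. evaluation metric for validation
    "loss": "CeCriterion",                                   # 5. cross entropy for classification
    "kd_loss": "MseCriterion",                               # 6. loss in the knowledge distillation setting
    "adv_loss": "SymKlCriterion",                            # 7. loss in the adversarial setting
    "n_class": 3,                                            # 8. number of categories for SNLI
    "task_type": "Classification",                           # 9. classification vs. regression
}
```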
{
"text": "MT-DNN is also highly extensible, as shown in Figure 4 , loss and task layer type point to existing classes in code. Users can write customized classes and plug into MT-DNN. The customized classes could then be used via configuration.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 54,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Extension",
"sec_num": "3.3"
},
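A hedged sketch of the plug-in idea: a user-defined task layer written as an ordinary PyTorch module that a configuration could reference by name. The registry shown here is a stand-in invented for the example; MT-DNN's actual class lookup may differ.

```python
import torch.nn as nn

# Hypothetical registry standing in for "point to existing classes in code".
CUSTOM_TASK_LAYERS = {}

def register_task_layer(name):
    def wrap(cls):
        CUSTOM_TASK_LAYERS[name] = cls
        return cls
    return wrap

@register_task_layer("MlpWithDropout")
class MlpWithDropout(nn.Module):
    """A custom task-specific head a user might plug in via the configuration."""

    def __init__(self, hidden_size, n_class, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.Tanh(),
            nn.Dropout(dropout), nn.Linear(hidden_size, n_class))

    def forward(self, pooled):
        return self.net(pooled)

# A config could then select it by name, e.g. "task_layer_type": "MlpWithDropout".
head = CUSTOM_TASK_LAYERS["MlpWithDropout"](hidden_size=32, n_class=3)
```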
{
"text": "Microsoft MT-DNN is an open-source natural language understanding toolkit which facilitates researchers and developers to build customized deep learning models. Its key features are: 1) support for robust and transferable learning using adversarial multi-task learning paradigm; 2) enable knowledge distillation under the multi-task learning setting which can be leveraged to derive lighter models for efficient online deployment. We will extend MT-DNN to support Natural Language Generation tasks, e.g. Question Generation, and incorporate more pre-trained encoders, e.g. T5 (Raffel et al., 2019) in future.",
"cite_spans": [
{
"start": 576,
"end": 597,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "We thank Liyuan Liu, Sha Li, Mehrad Moradshahi and other contributors to the package, and the anonymous reviewers for valuable discussions and comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Do deep nets really need to be deep?",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2654--2662",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Advances in neural information processing systems, pages 2654-2662.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bayesian dark knowledge",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Anoop Korattikara Balan",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"P"
],
"last": "Rathod",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3438--3446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anoop Korattikara Balan, Vivek Rathod, Kevin P Mur- phy, and Max Welling. 2015. Bayesian dark knowl- edge. In Advances in Neural Information Process- ing Systems, pages 3438-3446.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The second PASCAL recognising textual entailment challenge",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Bar-Haim",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Ferro",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, and Danilo Giampiccolo. 2006. The second PASCAL recognising textual entailment challenge. In Pro- ceedings of the Second PASCAL Challenges Work- shop on Recognising Textual Entailment.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The fifth pascal recognizing textual entailment challenge",
"authors": [
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth pascal recognizing textual entailment challenge. In In Proc Text Analysis Conference (TAC'09.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multitask learning. Machine learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "28",
"issue": "",
"pages": "41--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.00055"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bam! born-again multi-task networks for natural language understanding",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.04829"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Urvashi Khandel- wal, Christopher D Manning, and Quoc V Le. 2019. Bam! born-again multi-task networks for natural language understanding. arXiv preprint arXiv:1907.04829.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of machine learning research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The pascal recognising textual entailment challenge",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Ido Dagan",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW'05",
"volume": "",
"issue": "",
"pages": "177--190",
"other_ids": {
"DOI": [
"10.1007/11736790_9"
]
},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Proceedings of the First Inter- national Conference on Machine Learning Chal- lenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual En- tailment, MLCW'05, pages 177-190, Berlin, Hei- delberg. Springer-Verlag.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "B",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Unified language model pre-training for natural language understanding and generation",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hsiao-Wuen",
"middle": [],
"last": "Hon",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "13042--13054",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xi- aodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understand- ing and generation. In Advances in Neural Informa- tion Processing Systems, pages 13042-13054.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Allennlp: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.07640"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language process- ing platform. arXiv preprint arXiv:1803.07640.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The third PASCAL recognizing textual entailment challenge",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recogniz- ing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9, Prague. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.02531"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Smart: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization",
"authors": [
{
"first": "Haoming",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Tuo",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03437"
]
},
"num": null,
"urls": [],
"raw_text": "Haoming Jiang, Pengcheng He, Weizhu Chen, Xi- aodong Liu, Jianfeng Gao, and Tuo Zhao. 2019. Smart: Robust and efficient fine-tuning for pre- trained natural language models through princi- pled regularized optimization. arXiv preprint arXiv:1911.03437.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SciTail: A textual entailment dataset from science question answering",
"authors": [
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In AAAI.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Biobert: pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.08746"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: pre-trained biomed- ical language representation model for biomedical text mining. arXiv preprint arXiv:1901.08746.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The winograd schema challenge",
"authors": [
{
"first": "Hector",
"middle": [],
"last": "Levesque",
"suffix": ""
},
{
"first": "Ernest",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Leora",
"middle": [],
"last": "Morgenstern",
"suffix": ""
}
],
"year": 2012,
"venue": "Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hector Levesque, Ernest Davis, and Leora Morgen- stern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Princi- ples of Knowledge Representation and Reasoning.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adversarial training for large neural language models",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.08994"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. 2020. Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Stochastic answer networks for natural language inference",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07888"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Kevin Duh, and Jianfeng Gao. 2018a. Stochastic answer networks for natural language in- ference. arXiv preprint arXiv:1804.07888.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Representation learning using multi-task deep neural networks for semantic classification and information retrieval",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Ye-Yi",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "912--921",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 912-921.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improving multi-task deep neural networks via knowledge distillation for natural language understanding",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09482"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Improving multi-task deep neural networks via knowledge distillation for natural language understanding. arXiv preprint arXiv:1904.09482.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multi-task deep neural networks for natural language understanding",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4487--4496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019b. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Stochastic answer networks for machine reading comprehension",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yelong",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018b. Stochastic answer networks for ma- chine reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers). Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019c. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Multi-task sequence to sequence learning",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06114"
]
},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Towards deep learning models resistant to adversarial attacks",
"authors": [
{
"first": "Aleksander",
"middle": [],
"last": "Madry",
"suffix": ""
},
{
"first": "Aleksandar",
"middle": [],
"last": "Makelov",
"suffix": ""
},
{
"first": "Ludwig",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Tsipras",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Vladu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.06083"
]
},
"num": null,
"urls": [],
"raw_text": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversar- ial attacks. arXiv preprint arXiv:1706.06083.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The stanford corenlp natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguis- tics: system demonstrations, pages 55-60.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Virtual adversarial training: a regularization method for supervised and semisupervised learning",
"authors": [
{
"first": "Takeru",
"middle": [],
"last": "Miyato",
"suffix": ""
},
{
"first": "Shin-ichi",
"middle": [],
"last": "Maeda",
"suffix": ""
},
{
"first": "Masanori",
"middle": [],
"last": "Koyama",
"suffix": ""
},
{
"first": "Shin",
"middle": [],
"last": "Ishii",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "41",
"issue": "",
"pages": "1979--1993",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi- supervised learning. IEEE transactions on pat- tern analysis and machine intelligence, 41(8):1979- 1993.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Adversarial nli: A new benchmark for natural language understanding",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.14599"
]
},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Ad- versarial nli: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.01038"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensi- ble toolkit for sequence modeling. arXiv preprint arXiv:1904.01038.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Ad- vances in Neural Information Processing Systems, pages 8024-8035.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "An overview of multi-task learning in",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2017,
"venue": "deep neural networks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.05098"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Ernie 2.0: A continual pre-training framework for language understanding",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.12412"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019. Ernie 2.0: A continual pre-training framework for language un- derstanding. arXiv preprint arXiv:1907.12412.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Distilling taskspecific knowledge from bert into simple neural networks",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Linqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Vechtomova",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.12136"
]
},
"num": null,
"urls": [],
"raw_text": "Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task- specific knowledge from bert into simple neural net- works. arXiv preprint arXiv:1903.12136.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Christos Christodoulopoulos, and Arpit Mittal",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
}
],
"year": 2018,
"venue": "Fever: a large-scale dataset for fact extraction and verification",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.05355"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. Fever: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07461"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Neural network acceptability judgments",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.12471"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R Bow- man. 2018. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Freelb: Enhanced adversarial training for language understanding",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Goldstein",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11764"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, and Jingjing Liu. 2019. Freelb: En- hanced adversarial training for language understand- ing. arXiv preprint arXiv:1909.11764.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "The configuration of SNLI.",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"text": "",
"content": "<table><tr><td colspan=\"2\">: Summary of the four benchmarks: GLUE,</td></tr><tr><td colspan=\"2\">SNLI, SciTail and ANLI.</td></tr><tr><td>Model</td><td>MNLI RTE QNLI SST MRPC</td></tr><tr><td/><td>Acc Acc Acc Acc F1</td></tr><tr><td>BERT</td><td>84.5 63.5 91.1 92.9 89.0</td></tr><tr><td>BERT + MTL</td><td>85.3 79.1 91.5 93.6 89.2</td></tr><tr><td colspan=\"2\">BERT + AdvTrain 85.6 71.2 91.6 93.0 91.3</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF2": {
"num": null,
"text": "Comparison among single task, multi-Task and adversarial training on MNLI, RTE, QNLI, SST and MPRC in GLUE.",
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF3": {
"num": null,
"text": "Results in terms of accuracy on the ANLI.",
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}