{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:47:13.904521Z"
},
"title": "TextBrewer: An Open-Source Knowledge Distillation Toolkit for Natural Language Processing",
"authors": [
{
"first": "Ziqing",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin, Langfang",
"country": "China, China"
}
},
"email": "zqyang5@iflytek.com"
},
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin, Langfang",
"country": "China, China"
}
},
"email": "ymcui@iflytek.com"
},
{
"first": "Zhipeng",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin, Langfang",
"country": "China, China"
}
},
"email": "zpchen@iflytek.com"
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin, Langfang",
"country": "China, China"
}
},
"email": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin, Langfang",
"country": "China, China"
}
},
"email": "tliu@ir.hit.edu.cn"
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin, Langfang",
"country": "China, China"
}
},
"email": "sjwang3@iflytek.com"
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin, Langfang",
"country": "China, China"
}
},
"email": "gphu@iflytek.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we introduce TextBrewer, an open-source knowledge distillation toolkit designed for natural language processing. It works with different neural network models and supports various kinds of supervised learning tasks, such as text classification, reading comprehension, sequence labeling. TextBrewer provides a simple and uniform workflow that enables quick setting up of distillation experiments with highly flexible configurations. It offers a set of predefined distillation methods and can be extended with custom code. As a case study, we use TextBrewer to distill BERT on several typical NLP tasks. With simple configurations, we achieve results that are comparable with or even higher than the public distilled BERT models with similar numbers of parameters. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we introduce TextBrewer, an open-source knowledge distillation toolkit designed for natural language processing. It works with different neural network models and supports various kinds of supervised learning tasks, such as text classification, reading comprehension, sequence labeling. TextBrewer provides a simple and uniform workflow that enables quick setting up of distillation experiments with highly flexible configurations. It offers a set of predefined distillation methods and can be extended with custom code. As a case study, we use TextBrewer to distill BERT on several typical NLP tasks. With simple configurations, we achieve results that are comparable with or even higher than the public distilled BERT models with similar numbers of parameters. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Large pre-trained language models, such as GPT (Radford, 2018) , BERT (Devlin et al., 2019) , RoBERTa and XLNet have achieved great success in many NLP tasks and greatly contributed to the progress of NLP research. However, one big issue of these models is the high demand for computing resources -they usually have hundreds of millions of parameters, and take several gigabytes of memory to train and inference -which makes it impractical to deploy them on mobile devices or online systems. From a research point of view, we are tempted to ask: is it necessary to have such a big model that contains hundreds of millions of parameters to achieve a high performance? Motivated by the above considerations, recently, some researchers in the NLP community have tried to design lite models (Lan et al., 2019) , or resort to knowledge 1 TextBrewer: http://textbrewer.hfl-rc.com distillation (KD) technique to compress large pretrained models to small models.",
"cite_spans": [
{
"start": 47,
"end": 62,
"text": "(Radford, 2018)",
"ref_id": "BIBREF12"
},
{
"start": 70,
"end": 91,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 787,
"end": 805,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "KD is a technique of transferring knowledge from a teacher model to a student model, which is usually smaller than the teacher. The student model is trained to mimic the outputs of the teacher model. Before the birth of BERT, KD had been applied to several specific tasks like machine translation (Kim and Rush, 2016; Tan et al., 2019) in NLP. While the recent studies of distilling large pre-trained models focus on finding general distillation methods that work on various tasks and are receiving more and more attention (Sanh et al., 2019; Jiao et al., 2019; Sun et al., 2019a; Tang et al., 2019; Clark et al., 2019; .",
"cite_spans": [
{
"start": 297,
"end": 317,
"text": "(Kim and Rush, 2016;",
"ref_id": "BIBREF7"
},
{
"start": 318,
"end": 335,
"text": "Tan et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 523,
"end": 542,
"text": "(Sanh et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 543,
"end": 561,
"text": "Jiao et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 562,
"end": 580,
"text": "Sun et al., 2019a;",
"ref_id": "BIBREF16"
},
{
"start": 581,
"end": 599,
"text": "Tang et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 600,
"end": 619,
"text": "Clark et al., 2019;",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Though various distillation methods have been proposed, they usually share a common workflow: firstly, train a teacher model, then optimize the student model by minimizing some losses that are calculated between the outputs of the teacher and the student. Therefore it is desirable to have a reusable distillation workflow framework and treat different distillation strategies and tricks as plugins so that they could be easily and arbitrarily added to the framework. In this way, we could also achieve great flexibility in experimenting with different combinations of distillation strategies and comparing their effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we introduce TextBrewer, a PyTorch-based distillation toolkit for NLP that aims to provide a unified distillation workflow, save the effort of setting up experiments and help users to distill more effective models. TextBrewer provides simple-to-use APIs, a collection of distillation methods, and highly customizable configurations. It has also been proved able to distill BERT models efficiently and reproduce the state-of-theart results on typical NLP tasks. The main features of TextBrewer are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Versatility in tasks and models. It works with a wide range of models, from the RNNbased model to the transformer-based model, and works on typical natural language understanding tasks. Its usability in tasks like text classification, reading comprehension, and sequence labeling has been fully tested.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Flexibility in configurations. The distillation process is configured by configuration objects, which can be initialized from JSON files and contain many tunable hyperparameters. Users can extend the configurations with new custom losses, schedulers, etc., if the presets do not meet their requirements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Including various distillation methods and strategies. KD has been studied extensively in computer vision (CV) and has achieved great success. It would be worthwhile to introduce these studies to the NLP community as some of the methods in these studies could also be applied to texts. TextBrewer includes a set of methods from both CV and NLP, such as flow of solution procedure (FSP) matrix loss (Yim et al., 2017) , neuron selectivity transfer (NST) (Huang and Wang, 2017) , probability shift and dynamic temperature (Wen et al., 2019) , attention matrix loss, multi-task distillation . In our experiments, we will show the effectiveness of applying methods from CV on NLP tasks.",
"cite_spans": [
{
"start": 400,
"end": 418,
"text": "(Yim et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 455,
"end": 477,
"text": "(Huang and Wang, 2017)",
"ref_id": "BIBREF5"
},
{
"start": 522,
"end": 540,
"text": "(Wen et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Being non-intrusive and simple to use. Nonintrusive means there is no need to modify the existing code that defines the models. Users can re-use the most parts of their existing training scripts, such as model definition and initialization, data preprocessing and task evaluation. Only some preparatory work (see Section 3.3) are additionally required to use TextBrewer to perform the distillation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "TextBrewer also provides some useful utilities such as model size analysis and data augmentation to help model design and distillation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently some distilled BERT models have been released, such as DistilBERT (Sanh et al., 2019) , TinyBERT (Jiao et al., 2019) , and ERNIE Slim 2 . DistilBERT performs distillation on the pretraining task, i.e., masked language modeling.",
"cite_spans": [
{
"start": 75,
"end": 94,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 106,
"end": 125,
"text": "(Jiao et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "2 https://github.com/PaddlePaddle/ERNIE TinyBERT performs transformer distillation at both the pre-training and task-specific learning stages. ERNIE Slim distills ERNIE (Sun et al., 2019b ,c)on a sentiment classification task. Their distillation code is publicly available, and users can replicate their experiments easily. However, it is laborious and error-prone to change the distillation method or adapt the distillation code for some other models and tasks, since the code is not written for general distillation purposes.",
"cite_spans": [
{
"start": 169,
"end": 187,
"text": "(Sun et al., 2019b",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There also exist some libraries for general model compression. Distiller (Zmora et al., 2018) and PaddleSlim 3 are two versatile libraries supporting pruning, quantization and knowledge distillation. They focus on models and tasks in computer vision. In comparison, TextBrewer is more focused on knowledge distillation on NLP tasks, more flexible, and offers more functionalities. Based on PyTorch, It provides simple APIs and rich customization for fast and clean implementations of experiments.",
"cite_spans": [
{
"start": 73,
"end": 93,
"text": "(Zmora et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Architecture and Design Figure 1 shows an overview of the main functionalities and architecture of TextBrewer. To support different models and different tasks and meanwhile stay flexible and extensible, TextBrewer provides distillers to conduct the actual experiments and configuration classes to configure the behaviors of the distillers.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Distillers are the cores of TextBrewer. They automatically train and save models and support custom evaluation functions. Five distillers have been implemented: BasicDistiller is used for single-task single-teacher distillation; GeneralDistiller in addition supports more advanced intermediate loss functions; MultiTeacherDistiller distills an ensemble of teacher models into a single student model; MultiTaskDistiller distills multiple teacher models of different tasks into a single multi-task student model (Clark et al., 2019; . We also have implemented BasicTrainer for training teachers on labeled data to unify the workflows of supervised learning and distillation. All the distillers share the same interface and usage. They can be replaced by each other easily. ",
"cite_spans": [
{
"start": 510,
"end": 530,
"text": "(Clark et al., 2019;",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distillers",
"sec_num": "3.1"
},
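Because all distillers share the same construction and usage pattern, swapping one for another is essentially a one-line change. The sketch below illustrates this; the class and keyword names follow the TextBrewer documentation, while the models and adaptors are placeholders assumed to be defined by the user's existing code (a fully self-contained example follows in Section 3.3).

```python
from textbrewer import (BasicDistiller, GeneralDistiller,
                        MultiTeacherDistiller, TrainingConfig, DistillationConfig)

# Placeholders: teacher_model, student_model, adaptor_T and adaptor_S are
# assumed to be defined elsewhere (see the workflow sketch in Section 3.3).
common = dict(train_config=TrainingConfig(),
              distill_config=DistillationConfig(),
              model_S=student_model,
              adaptor_T=adaptor_T,
              adaptor_S=adaptor_S)

# Single teacher with intermediate losses:
distiller = GeneralDistiller(model_T=teacher_model, **common)
# Swapping in another distiller only changes the class (and, for the
# multi-teacher case, the teacher argument becomes a list of teachers):
# distiller = BasicDistiller(model_T=teacher_model, **common)
# distiller = MultiTeacherDistiller(model_T=[teacher_1, teacher_2], **common)
```

The training call (shown in Section 3.3) is identical for all of them, which is what makes the distillers interchangeable.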
{
"text": "The general training settings and the distillation method settings of a distiller are specified by two configurations: TrainingConfig and DistillationConfig.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Configurations and Presets",
"sec_num": "3.2"
},
{
"text": "TrainingConfig defines the settings that are general to deep learning experiments, including the directory where logs and student model are stored (log dir, output dir), the device to use (device), the frequency of storing and evaluating student model (ckpt frequencey), etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Configurations and Presets",
"sec_num": "3.2"
},
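As a concrete illustration of the options listed above, the following is a minimal sketch that builds a TrainingConfig directly in Python; the keyword names (log_dir, output_dir, device, ckpt_frequency) follow the TextBrewer documentation, and the chosen values are arbitrary placeholders.

```python
from textbrewer import TrainingConfig

# A minimal sketch: general training settings only. The values are placeholders.
train_config = TrainingConfig(
    log_dir='logs',             # where training logs are written
    output_dir='saved_models',  # where student checkpoints are stored
    device='cuda',              # device used for distillation
    ckpt_frequency=2,           # how often the student is saved/evaluated per epoch
                                # (check the docs for its exact semantics)
)
```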
{
"text": "DistillationConfig defines the settings that are pertinent to distillation, where various distillation methods could be configured or enabled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Configurations and Presets",
"sec_num": "3.2"
},
{
"text": "It includes the type of KD loss (kd loss type), the temperature and weight of KD loss (temperature and kd loss weight), the weight of hard-label loss (hard label weight), probability shift switch, schedulers and intermediate losses, etc. Intermediate losses are used for computing the losses between the intermediate states of teacher and student, and they could be freely combined and added to the distillers. Schedulers are used to adjust loss weight or temperature dynamically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Configurations and Presets",
"sec_num": "3.2"
},
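The sketch below shows how the distillation-specific options named above could be set in Python; the keyword names follow the TextBrewer documentation, and the values are illustrative rather than recommended settings.

```python
from textbrewer import DistillationConfig

# A hedged sketch of the options described above; values are illustrative only.
distill_config = DistillationConfig(
    temperature=8,              # softmax temperature used by the KD loss
    kd_loss_type='ce',          # type of KD loss between teacher and student logits
    kd_loss_weight=1.0,         # weight of the KD loss
    hard_label_weight=0.5,      # weight of the loss computed against gold labels
    probability_shift=False,    # switch for the probability-shift strategy
    intermediate_matches=None,  # intermediate losses are configured separately
)
```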
{
"text": "The available values of configuration options such as loss functions and schedulers are defined as dictionaries in presets. For example, the loss function dictionary includes hidden state loss, cosine similarity loss, FSP loss, NST loss, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Configurations and Presets",
"sec_num": "3.2"
},
{
"text": "All the configurations can be initialized from JSON files. In Figure 3 we show an example of DistillationConfig for distilling BERT BASE , to a 4-layer transformers. See Section 4 for more details. ",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Configurations and Presets",
"sec_num": "3.2"
},
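Since Figure 3 itself is not reproduced in this parse, the following is a hedged sketch of the JSON-initialization mechanism only. It assumes the configuration classes expose a from_json_file constructor, as described in the TextBrewer documentation, and the JSON content written here is a simplified placeholder, not the paper's Figure 3.

```python
import json
from textbrewer import DistillationConfig

# Write a simplified placeholder configuration to disk. This is NOT the paper's
# Figure 3; it only illustrates initializing a configuration from a JSON file.
config_dict = {
    'temperature': 8,
    'kd_loss_type': 'ce',
    'hard_label_weight': 0,
}
with open('distill_config.json', 'w') as f:
    json.dump(config_dict, f)

# Assumption: configurations can be built via from_json_file (or from_dict);
# check the installed version's API for the exact constructor names.
distill_config = DistillationConfig.from_json_file('distill_config.json')
```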
{
"text": "Before distilling a teacher model using TextBrewer, some preparatory works have to be done:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Workflow",
"sec_num": "3.3"
},
{
"text": "1. Train a teacher model on a labeled dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Workflow",
"sec_num": "3.3"
},
{
"text": "Users usually train the teacher model with their own training scripts. TextBrewer also provides BasicTrainer for supervised training on a labeled dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Workflow",
"sec_num": "3.3"
},
{
"text": "2. Define and initialize the student model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Workflow",
"sec_num": "3.3"
},
{
"text": "3. Build a dataloader of the dataset for distillation and initialize the optimizer and learning rate scheduler.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Workflow",
"sec_num": "3.3"
},
{
"text": "The above steps are usually common to all deep learning experiments. To perform distillation, take the following additional steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Workflow",
"sec_num": "3.3"
},
{
"text": "1. Initialize training and distillation configurations, and construct a distiller.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Workflow",
"sec_num": "3.3"
},
{
"text": "2. Define adaptors and a callback function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Workflow",
"sec_num": "3.3"
},
{
"text": "3. Call the train method of the distiller.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Workflow",
"sec_num": "3.3"
},
{
"text": "A code snippet that shows the minimal workflow is presented in Figure 2 . The concepts of callback and adaptor will be explained below.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 71,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Workflow",
"sec_num": "3.3"
},
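Because the code of Figure 2 is not included in this parse, the sketch below is a hedged reconstruction of a minimal TextBrewer workflow in the spirit of that figure. It uses tiny dummy PyTorch models and random data so that it is self-contained; the class and keyword names follow the TextBrewer documentation, but the dummy models, data, and hyperparameters are placeholders, and the exact train() signature (notably how a learning-rate scheduler is passed) differs across TextBrewer releases.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from textbrewer import GeneralDistiller, TrainingConfig, DistillationConfig

# Dummy teacher and student standing in for trained task models (placeholders).
teacher_model = torch.nn.Linear(128, 2)
student_model = torch.nn.Linear(128, 2)

# Common step 3: build a dataloader and an optimizer (a scheduler is optional).
dataset = TensorDataset(torch.randn(32, 128))
dataloader = DataLoader(dataset, batch_size=8)
optimizer = torch.optim.AdamW(student_model.parameters(), lr=1e-4)

# A minimal adaptor only needs to explain the logits (see Section 3.3.2).
def simple_adaptor(batch, model_outputs):
    return {'logits': model_outputs}

# Additional steps: configurations, distiller, and the train call.
train_config = TrainingConfig(device='cpu')
distill_config = DistillationConfig(temperature=8)
distiller = GeneralDistiller(train_config=train_config, distill_config=distill_config,
                             model_T=teacher_model, model_S=student_model,
                             adaptor_T=simple_adaptor, adaptor_S=simple_adaptor)

# Check the installed version's documentation for the full train() signature.
with distiller:
    distiller.train(optimizer, dataloader, num_epochs=1, callback=None)
```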
{
"text": "An example of distillation configuration. This configuration is used to distill a 12-layer BERT BASE to a 4-layer T4-tiny.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3:",
"sec_num": null
},
{
"text": "To monitor the performance of the student model during training, people usually evaluate the student model on a development set at some checkpoints besides logging the loss curve. For example, in the early stopping strategy, users choose the best model weights checkpoint based on the performance of the student model on the development set at the end of each epoch. TextBrewer supports such functionality by providing the callback function argument in the train method, as shown in line 24 of Figure 2 . The callback function takes two arguments: the student model and the current training step. At each checkpoint step (determined by num train epochs and ckpt frequencey), the distiller saves the student model and then calls the callback function.",
"cite_spans": [],
"ref_spans": [
{
"start": 494,
"end": 502,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Callback Function",
"sec_num": "3.3.1"
},
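The sketch below shows what such a callback might look like, matching the signature described above (student model and current training step); evaluate_on_dev_set is a hypothetical user-supplied evaluation routine, not part of TextBrewer.

```python
# A hedged sketch of a callback. evaluate_on_dev_set() is a placeholder for a
# task-specific evaluation routine written by the user.
best_score = float('-inf')

def my_callback(model, step):
    """Called by the distiller at each checkpoint with the student model and step."""
    global best_score
    model.eval()
    score = evaluate_on_dev_set(model)           # hypothetical user function
    print(f'step {step}: dev score = {score:.4f}')
    if score > best_score:                       # simple early-stopping bookkeeping
        best_score = score
    model.train()

# Passed to the distiller when training starts, e.g.:
# distiller.train(optimizer, dataloader, num_epochs=30, callback=my_callback)
```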
{
"text": "Since it is impractical to implement evaluation metrics and evaluation procedures for all NLP tasks, we encourage users to implement their own evaluation functions as the callbacks for the best practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Callback Function",
"sec_num": "3.3.1"
},
{
"text": "The distiller is model-agnostic. It needs a translator to translate the model outputs into meaningful data. Adaptor plays the role of translator. An Adaptor is an interface and responsible for explaining the inputs and outputs of the teacher and student for the distiller.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor",
"sec_num": "3.3.2"
},
{
"text": "Adaptor takes two arguments: the model inputs and the model outputs. It is expected to return a dictionary with some specific keys. Each key explains the meaning of the corresponding value, as shown in Figure 1 (b) . For example, logits is the logits of final outputs, hidden is intermediate hidden states, attention is the attention matrices, inputs mask is used to mask padding positions. The distiller only takes necessary elements from the outputs of adaptors according to its distillation configurations. A minimal adaptor only needs to explain logits, as shown in lines 11-14 of Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 214,
"text": "Figure 1 (b)",
"ref_id": "FIGREF0"
},
{
"start": 585,
"end": 593,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Adaptor",
"sec_num": "3.3.2"
},
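The sketch below shows an adaptor for a BERT-like model returning the keys listed above. Which positions of batch and model_outputs hold the logits, hidden states, and attention matrices depends on the user's model, so the indexing here is an assumption for illustration.

```python
# A hedged sketch of an adaptor for a BERT-like model. The returned keys are
# the ones described in the text; the indices into `batch` and `model_outputs`
# depend on the user's model and are assumed here.
def bert_adaptor(batch, model_outputs):
    return {
        'logits': model_outputs[0],              # final output logits
        'hidden': model_outputs[1],              # tuple of intermediate hidden states
        'attention': model_outputs[2],           # tuple of attention matrices
        'inputs_mask': batch['attention_mask'],  # marks non-padding positions
    }

# A minimal adaptor only needs to explain the logits:
def minimal_adaptor(batch, model_outputs):
    return {'logits': model_outputs[0]}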
{
"text": "TextBrewer also works with users' custom modules. New loss functions and schedulers can be easily added to the toolkit. For example, to use a custom loss function, one first implements the loss function with a compatible interface, then adds it to the loss function dictionary in the presets with a custom name, so that the new loss function becomes available as a new option value of the configuration and can be recognized by distillers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensibility",
"sec_num": "3.4"
},
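A hedged sketch of this extension mechanism follows. It assumes the loss-function dictionary is exposed as textbrewer.presets.MATCH_LOSS_MAP and that an intermediate (match) loss takes the student feature, the teacher feature, and an optional mask; both the dictionary name and the loss signature should be verified against the documentation of the installed TextBrewer version.

```python
import torch
from textbrewer import presets

# Assumption: intermediate losses live in presets.MATCH_LOSS_MAP and take
# (student_feature, teacher_feature, mask=None); verify against the docs.
def l1_hidden_loss(feature_S, feature_T, mask=None):
    # Mean absolute error between student and teacher hidden states.
    return torch.abs(feature_S - feature_T).mean()

presets.MATCH_LOSS_MAP['hidden_l1'] = l1_hidden_loss

# The new name then becomes a valid option value in a configuration, e.g.:
# intermediate_matches=[{'layer_T': 11, 'layer_S': 3, 'feature': 'hidden',
#                        'loss': 'hidden_l1', 'weight': 1}]
```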
{
"text": "In this section, we conduct several experiments to show TextBrewer's ability to distill large pretrained models on different NLP tasks and achieve results are comparable with or even higher than the public distilled BERT models with similar numbers of parameters. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Datasets and tasks. We conduct experiments on both English and Chinese datasets. For English datasets, We use MNLI for text classification task, SQuAD1.1 (Rajpurkar et al., 2016) for span-extraction machine reading comprehension (MRC) task and CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) for named entity recognition (NER) task. For Chinese datasets, we use the Chinese part of XNLI (Conneau et al., 2018) , LCQMC , CMRC 2018 (Cui et al., 2019b) and DRCD (Shao et al., 2018) . XNLI is the multilingual version of MNLI. LCQMC is a large-scale Chinese question matching corpus. CMRC 2018 and DRCD are two span-extraction machine reading comprehension datasets similar to SQuAD. The statistics of the datasets are listed in Table 1 . Models. All the teachers are BERT BASE -based models. For English tasks, teachers are initialized with the weights released by Google 5 and converted into PyTorch format via Transformers 6 . For Chinese tasks, teacher is initialized with the pre-trained RoBERTa-wwm-ext 7 (Cui et al., 2019a) . We test the performance of the following student models:",
"cite_spans": [
{
"start": 154,
"end": 178,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 255,
"end": 292,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF21"
},
{
"start": 388,
"end": 410,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 431,
"end": 450,
"text": "(Cui et al., 2019b)",
"ref_id": "BIBREF3"
},
{
"start": 460,
"end": 479,
"text": "(Shao et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 1008,
"end": 1027,
"text": "(Cui et al., 2019a)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 726,
"end": 733,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "\u2022 T6 and T3 are BERT BASE with fewer layers of transformers. Especially, T6 has the same structure as DistilBERT (Sanh et al., 2019) .",
"cite_spans": [
{
"start": 113,
"end": 132,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "\u2022 T3-small is a 3-layer BERT with half BERTbase's hidden size and feed-forward size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "\u2022 T4-tiny is the same as TinyBERT, a 4-layer model with an even smaller hidden size and feedforward size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "\u2022 BiGRU is a single-layer bidirectional GRU. Its word embeddings are taken from BERT BASE .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "T3-small and T4-tiny are initialized randomly. The model structures of the teacher and students are summarized in Table 3 . Training settings. To keep experiments simple, we directly distill the teacher model that has been trained on the task, while we do not perform task-irrelevant language modeling distillation in advance. The number of epochs ranges from 30 to 60, and the learning rate of students is 1e-4 for all distillation experiments. Distillation settings. Temperature is set to 8 for all experiments. We add intermediate losses uniformly distributed among all the layers between teacher and student (except BiGRU). The loss functions we choose are hidden mse loss which computes the mean square loss between two hidden states, and NST loss which is an effective method in CV. In Figure 3 we show an example of distillation configuration for distilling BERT BASE to a T4-tiny. Since their hidden sizes are different, we use proj option to add linear layers to match the dimensions. The linear layers will be trained together with the student automatically. We experiment with two kinds of distillers: GeneralDistiller and MultiTeacherDistiller .",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 792,
"end": 800,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
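The sketch below is a configuration in the spirit of the settings just described: temperature 8, hidden_mse and NST intermediate losses, and a linear projection from T4-tiny's 312-dimensional hidden states to BERT BASE's 768 dimensions. The layer pairs shown are illustrative, not a copy of the paper's Figure 3, and the proj format (['linear', student_dim, teacher_dim]) and the preset names 'hidden_mse' and 'nst' should be checked against the TextBrewer documentation.

```python
from textbrewer import DistillationConfig

# A hedged sketch, not the paper's exact Figure 3. Layer indices refer to the
# entries of the 'hidden' list returned by the adaptors and are illustrative.
distill_config = DistillationConfig(
    temperature=8,
    intermediate_matches=[
        {'layer_T': 3,  'layer_S': 1, 'feature': 'hidden',
         'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 312, 768]},
        {'layer_T': 7,  'layer_S': 2, 'feature': 'hidden',
         'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 312, 768]},
        {'layer_T': 11, 'layer_S': 3, 'feature': 'hidden',
         'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 312, 768]},
        # NST compares pairwise similarity patterns, so no projection is needed.
        {'layer_T': 11, 'layer_S': 3, 'feature': 'hidden',
         'loss': 'nst', 'weight': 1},
    ],
)
```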
{
"text": "We list the public results (DistilBERT and Tiny-BERT) and our distillation results obtained by GeneralDistiller in Table 2 . We have the following observations.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results on English Datasets",
"sec_num": "4.2"
},
{
"text": "First, teachers can be distilled to T6 models with minor losses in performance. All the T6 models achieve 99% performance of the teachers, higher than the DistilBERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on English Datasets",
"sec_num": "4.2"
},
{
"text": "Second, T4-tiny outperforms TinyBERT though they share the same structure. This is attributed to the NST losses in the distillation configuration. This result proves the effectiveness of applying KD method developed in CV on NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on English Datasets",
"sec_num": "4.2"
},
{
"text": "Third, although T4-tiny has less parameters than T3-small, T4-tiny outperforms T3-small in most T6 6 768 3072 65M 60% T3 3 768 3072 44M 41% T3-small 3 384 1536 17M 16% T4-tiny 4 312 1200 14M 13% BiGRU 1 768 -31M 29% cases. It may be a hint that narrow-and-deep models are better than wide-and-shallow models. Finally, data augmentation (DA) is critical. For the experiments in the last line in Table 2 , we use additional datasets during distillation: a subset of NewsQA (Trischler et al., 2017) training set is used in SQuAD; passages from the HotpotQA (Yang et al., 2018) training set is used in CoNLL-2003. The augmentation datasets significantly improve the performance, especially when the size of the training set is small, like CoNLL-2003. We next show the effectiveness of MultiTeacherDistiller, which distills an ensemble of teachers to a single student model. For each task, we train three BERT BASE teacher models with different seeds. The student is also a BERT BASE model. The temperature is set to 8, and intermediate losses are not used. As Table 4 shows, for each task, the student achieves the best performance, even higher than the ensemble result.",
"cite_spans": [
{
"start": 499,
"end": 523,
"text": "(Trischler et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 582,
"end": 601,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 763,
"end": 774,
"text": "CoNLL-2003.",
"ref_id": null
}
],
"ref_spans": [
{
"start": 422,
"end": 429,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 1084,
"end": 1091,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results on English Datasets",
"sec_num": "4.2"
},
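The sketch below mirrors the multi-teacher setup described above: an ensemble of three teachers trained with different seeds, temperature 8, and no intermediate losses. The class and keyword names follow the TextBrewer documentation; the teacher and student models, adaptor, optimizer, and dataloader are placeholders assumed to be defined as in the earlier sketches, and the number of epochs is illustrative.

```python
from textbrewer import MultiTeacherDistiller, TrainingConfig, DistillationConfig

# Placeholders: teacher_1/2/3, student_model, simple_adaptor, optimizer and
# dataloader are assumed to be defined as in the earlier sketches.
distiller = MultiTeacherDistiller(
    train_config=TrainingConfig(),
    distill_config=DistillationConfig(temperature=8),  # no intermediate_matches
    model_T=[teacher_1, teacher_2, teacher_3],          # the ensemble of teachers
    model_S=student_model,
    adaptor_T=simple_adaptor,
    adaptor_S=simple_adaptor,
)
with distiller:
    distiller.train(optimizer, dataloader, num_epochs=30, callback=None)
```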
{
"text": "The results on Chinese datasets are presented in Table 5 . We notice that T4-tiny still outperforms T3-small on all tasks, which is consistent with their performance on English tasks. In the experiments with DA, CMRC 2018 and DRCD take each other's dataset as data augmentation. We observe that since CMRC 2018 has a relatively small training set, DA has a much more significant effect.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Results on Chinese Datasets",
"sec_num": "5"
},
{
"text": "In this paper, we present TextBrewer, a flexible PyTorch-based distillation toolkit for NLP research and applications. TextBrewer provides rich customization options for users to compare different distillation methods and build their strategies. We have conducted a series of experiments. The results show that the distilled models can achieve state-of-the-art results with simple settings. TextBrewer also has its limitations. For example, its usability in generation tasks such as machine translation has not been tested. We will keep adding more examples and tests to expand TextBrewer's scope of application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Apart from the distillation strategies, the model structure also affects the performance. In the future, we aim to integrate neural architecture search into the toolkit to automate the searching for model structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "https://github.com/PaddlePaddle/PaddleSlim",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More results are presented in the online documentation: https://textbrewer.readthedocs.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/google-research/bert 6 https://github.com/huggingface/transformers 7 https://github.com/ymcui/Chinese-BERT-wwm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank all anonymous reviewers for their valuable comments on our work. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011, and 61772153.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "BAM! born-again multi-task networks for natural language understanding",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5931--5937",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1595"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Urvashi Khandel- wal, Christopher D. Manning, and Quoc V. Le. 2019. BAM! born-again multi-task networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 5931-5937, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "XNLI: Evaluating cross-lingual sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2475--2485",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1269"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pre-training with whole word masking for chinese",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ziqing",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019a. Pre-training with whole word masking for chinese BERT. CoRR, abs/1906.08101.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A span-extraction dataset for Chinese machine reading comprehension",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wentao",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5886--5891",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1600"
]
},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, and Guop- ing Hu. 2019b. A span-extraction dataset for Chi- nese machine reading comprehension. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5886-5891, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Like what you like: Knowledge distill via neuron selectivity transfer",
"authors": [
{
"first": "Zehao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Naiyan",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zehao Huang and Naiyan Wang. 2017. Like what you like: Knowledge distill via neuron selectivity trans- fer. CoRR, abs/1707.01219.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Tinybert: Distilling BERT for natural language understanding",
"authors": [
{
"first": "Xiaoqi",
"middle": [],
"last": "Jiao",
"suffix": ""
},
{
"first": "Yichun",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Fang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling BERT for natural lan- guage understanding. CoRR, abs/1909.10351.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Sequencelevel knowledge distillation",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1317--1327",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1139"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim and Alexander M. Rush. 2016. Sequence- level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1317-1327, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "ALBERT: A lite BERT for selfsupervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Sori- cut. 2019. ALBERT: A lite BERT for self- supervised learning of language representations. CoRR, abs/1909.11942.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving multi-task deep neural networks via knowledge distillation for natural language understanding",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019a. Improving multi-task deep neural networks via knowledge distillation for natural lan- guage understanding. CoRR, abs/1904.09482.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "LCQMC:a large-scale Chinese question matching corpus",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Huajun",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Dongfang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1952--1962",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Liu, Qingcai Chen, Chong Deng, Huajun Zeng, Jing Chen, Dongfang Li, and Buzhou Tang. 2018. LCQMC:a large-scale Chinese question matching corpus. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1952-1962, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford. 2018. Improving language understand- ing by generative pre-training.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "DRCD: a chinese machine reading comprehension dataset",
"authors": [
{
"first": "Chih-Chieh",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Trois",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yuting",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Yiying",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Tsai",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Chieh Shao, Trois Liu, Yuting Lai, Yiying Tseng, and Sam Tsai. 2018. DRCD: a chinese machine reading comprehension dataset. CoRR, abs/1806.00920.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Patient knowledge distillation for BERT model compression",
"authors": [
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4323--4332",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1441"
]
},
"num": null,
"urls": [],
"raw_text": "Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019a. Patient knowledge distillation for BERT model com- pression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4323-4332, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "ERNIE: enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yu-Kun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Hao Tian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019b. ERNIE: en- hanced representation through knowledge integra- tion. CoRR, abs/1904.09223.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "ERNIE 2.0: A continual pre-training framework for language understanding",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yu-Kun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Hao Tian",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019c. ERNIE 2.0: A continual pre-training framework for language understanding. CoRR, abs/1907.12412.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multilingual neural machine translation with knowledge distillation",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and Tie-Yan Liu. 2019. Multilingual neural machine translation with knowledge distillation. In 7th Inter- national Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Natural language generation for effective knowledge distillation",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP",
"volume": "",
"issue": "",
"pages": "202--208",
"other_ids": {
"DOI": [
"10.18653/v1/D19-6122"
]
},
"num": null,
"urls": [],
"raw_text": "Raphael Tang, Yao Lu, and Jimmy Lin. 2019. Natu- ral language generation for effective knowledge dis- tillation. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 202-208, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natu- ral Language Learning at HLT-NAACL 2003, pages 142-147.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "NewsQA: A machine comprehension dataset",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "191--200",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2623"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. NewsQA: A machine compre- hension dataset. In Proceedings of the 2nd Work- shop on Representation Learning for NLP, pages 191-200, Vancouver, Canada. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In 7th International Conference on Learning Representa- tions, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Preparing lessons: Improve knowledge distillation with better supervision",
"authors": [
{
"first": "Tiancheng",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Shenqi",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Xueming",
"middle": [],
"last": "Qian",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tiancheng Wen, Shenqi Lai, and Xueming Qian. 2019. Preparing lessons: Improve knowledge distillation with better supervision. CoRR, abs/1911.07471.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2369--2380",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1259"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answer- ing. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A gift from knowledge distillation: Fast optimization, network minimization and transfer learning",
"authors": [
{
"first": "Junho",
"middle": [],
"last": "Yim",
"suffix": ""
},
{
"first": "Donggyu",
"middle": [],
"last": "Joo",
"suffix": ""
},
{
"first": "Ji-Hoon",
"middle": [],
"last": "Bae",
"suffix": ""
},
{
"first": "Junmo",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "7130--7138",
"other_ids": {
"DOI": [
"10.1109/CVPR.2017.754"
]
},
"num": null,
"urls": [],
"raw_text": "Junho Yim, Donggyu Joo, Ji-Hoon Bae, and Junmo Kim. 2017. A gift from knowledge distillation: Fast optimization, network minimization and trans- fer learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Hon- olulu, HI, USA, July 21-26, 2017, pages 7130-7138.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Extreme language model compression with optimal subwords and shared projections",
"authors": [
{
"first": "Sanqiang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Denny",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanqiang Zhao, Raghav Gupta, Yang Song, and Denny Zhou. 2019. Extreme language model compres- sion with optimal subwords and shared projections. CoRR, abs/1909.11687.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bar Elharar, and Gal Novik",
"authors": [
{
"first": "Neta",
"middle": [],
"last": "Zmora",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Jacob",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Zlotnik",
"suffix": ""
},
{
"first": "Bar",
"middle": [],
"last": "Elharar",
"suffix": ""
},
{
"first": "Gal",
"middle": [],
"last": "Novik",
"suffix": ""
}
],
"year": 2018,
"venue": "Neural network distiller",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.1297430"
]
},
"num": null,
"urls": [],
"raw_text": "Neta Zmora, Guy Jacob, Lev Zlotnik, Bar Elharar, and Gal Novik. 2018. Neural network distiller.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(a) An overview of the main functionalities of TextBrewer. (b) A sketch that shows the function of adaptors inside a distiller.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "A code snippet that demonstrates the minimal TextBrewer workflow.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF2": {
"num": null,
"text": "A summary of the datasets used in experiments. The size of CoNLL-2003 is measured in number of entities.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF4": {
"num": null,
"text": "Performance of BERT BASE (teacher) and various students on the development sets of MNLI and SQuAD, and the test set of CoNLL-2003. m and mm under MNLI denote the accuracies on matched and mismatched sections respectively.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF6": {
"num": null,
"text": "Model sizes of teacher and students. The number of parameters includes embeddings but does not include output layers.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Model</td><td>MNLI m mm EM SQuAD F1</td><td>CoNLL-2003 F1</td></tr><tr><td colspan=\"2\">Teacher 1 83.6 84.0 81.1 88.6</td><td>91.2</td></tr><tr><td colspan=\"2\">Teacher 2 83.6 84.2 81.2 88.5</td><td>90.8</td></tr><tr><td colspan=\"2\">Teacher 3 83.7 83.8 81.2 88.7</td><td>91.3</td></tr><tr><td colspan=\"2\">Ensemble 84.3 84.7 82.3 89.4</td><td>91.5</td></tr><tr><td>Student</td><td>84.8 85.3 83.5 90.0</td><td>91.6</td></tr></table>"
},
"TABREF7": {
"num": null,
"text": "Results of multi-teacher distillation. All the models are BERT BASE . Different teachers are trained with different random seeds. For each task, the ensemble is the average of three teachers' results.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF9": {
"num": null,
"text": "Development set results for the teacher and various students on Chinese tasks.",
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}