---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- nyu-mll/glue
- sst2
---

## Paper

## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*

## Abstract

With the widespread use of pre-trained language models (PLMs), there has been increased research on how to make them applicable, especially in limited-resource or low-latency, high-throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model, such as BERT [[4](https://arxiv.org/abs/1810.04805)], on a total of 13 tasks from the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.

*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). 
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
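The core idea, several teachers guiding one student, can be illustrated with a simple distillation objective. The sketch below is an illustration only, assuming a standard soft-label ensemble (cross-entropy on the gold labels plus KL divergence to the averaged, temperature-softened teacher distribution); the paper's exact objective may differ, and `multi_teacher_kd_loss` is a hypothetical helper name.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=2.0, alpha=0.5):
    """Illustrative multi-teacher KD loss; not the paper's exact objective."""
    # Ensemble signal: average the teachers' temperature-softened distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    # The student matches the ensemble via KL divergence (scaled by T^2, as usual in KD).
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    kd_term = F.kl_div(student_log_probs, teacher_probs,
                       reduction="batchmean") * temperature ** 2
    # Supervised term on the gold task labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1 - alpha) * ce_term
```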

## SEAD-L-6_H-256_A-8-sst2

This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as the teacher, using the SEAD framework on the **sst2** task. For weight initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased).
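For inference, the checkpoint can be used with a standard 🤗 Transformers pipeline. A minimal sketch, assuming the model is published on the Hub (the id below is a placeholder, and the label strings depend on the model config):

```python
from transformers import pipeline

# Placeholder Hub id; substitute the actual repository path of this checkpoint.
model_id = "<org>/SEAD-L-6_H-256_A-8-sst2"

# sst2 is binary sentiment classification, so a text-classification pipeline fits.
classifier = pipeline("text-classification", model=model_id)
print(classifier("a gripping, beautifully photographed film"))
# e.g. [{'label': 'positive', 'score': 0.99}]  -- label names depend on the config
```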


## All SEAD Checkpoints

Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)

## Intended uses & limitations

More information needed

### Training hyperparameters

Please take a look at the `training_args.bin` file; it stores the `TrainingArguments` used during fine-tuning and can be inspected with `torch.load`:

```python
import torch

# training_args.bin is saved by the Trainer alongside the model weights;
# loading it returns the TrainingArguments object used during fine-tuning.
hyperparameters = torch.load("training_args.bin")
print(hyperparameters)
```

### Evaluation results

| eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9266        | 1.3676       | 637.636                 | 20.475                | 0.2503    | 872          |
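For reference, these numbers can be reproduced on the GLUE sst2 validation split (872 examples, matching `eval_samples` above). A hedged sketch, with a placeholder Hub id and an assumed label mapping:

```python
from datasets import load_dataset
from transformers import pipeline

# GLUE sst2 validation split: 872 examples, matching eval_samples above.
val = load_dataset("nyu-mll/glue", "sst2", split="validation")

# Placeholder Hub id; substitute the actual repository path of this checkpoint.
clf = pipeline("text-classification", model="<org>/SEAD-L-6_H-256_A-8-sst2")
preds = clf(val["sentence"], batch_size=64)

# Map the pipeline's string labels back to sst2's integer ids (0 = negative,
# 1 = positive); the exact label strings depend on the model config.
label2id = {"LABEL_0": 0, "LABEL_1": 1, "negative": 0, "positive": 1}
correct = sum(label2id[p["label"]] == y for p, y in zip(preds, val["label"]))
print(f"accuracy: {correct / len(val):.4f}")
```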


### Framework versions

- Transformers >=4.8.0
- PyTorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6

If you use these models, please cite the following paper:

```
@article{mei2022sead,
  author  = {Mei, Moyan and Sroch, Rohit},
  title   = {SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
  journal = {Lattice, The Machine Learning Journal by Association of Data Scientists},
  volume  = {3},
  number  = {1},
  month   = {Feb},
  day     = {26},
  year    = {2022},
  url     = {https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```