rohitsroch
committed on
Commit b4453a3 • 1 Parent(s): 68cd6d3
Push SEAD-L-6_H-384_A-12-qqp model weights
- README.md +80 -0
- config.json +35 -0
- eval_results.json +9 -0
- flax_model.msgpack +3 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +1 -0
- tf_model.h5 +3 -0
- tokenizer_config.json +1 -0
- training_args.bin +3 -0
- vocab.txt +0 -0
README.md
ADDED
@@ -0,0 +1,80 @@
---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- qqp
---

## Paper

## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*

## Abstract

With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low-latency, high-throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model, such as BERT [[4](https://arxiv.org/abs/1810.04805)], on a total of 13 tasks from the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.

*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*

## SEAD-L-6_H-384_A-12-qqp

This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using the SEAD framework on the **qqp** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased).

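As a quick usage illustration (not part of the original card), the checkpoint can be loaded with the standard Transformers auto classes and used to score a question pair for semantic duplication. The model id below is a placeholder for wherever this checkpoint is hosted on the Hub.

```python
# Hedged inference sketch: the model id is a placeholder, not taken from this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "SEAD-L-6_H-384_A-12-qqp"  # replace with "<org>/SEAD-L-6_H-384_A-12-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

q1 = "How do I improve my typing speed?"
q2 = "What can I do to type faster?"
inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1).squeeze()
# For GLUE QQP, label 1 conventionally marks the pair as duplicates.
print({"not_duplicate": probs[0].item(), "duplicate": probs[1].item()})
```
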
## All SEAD Checkpoints

Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)

## Intended uses & limitations

More information needed

### Training hyperparameters

Please take a look at the `training_args.bin` file.

```python
import torch

# training_args.bin stores the transformers TrainingArguments used for training,
# so the transformers library must be installed for torch.load to unpickle it.
hyperparameters = torch.load("training_args.bin")
print(hyperparameters)
```

### Evaluation results

| eval_accuracy | eval_f1 | eval_runtime (s) | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:-------:|:----------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9126 | 0.8822 | 23.0122 | 1756.896 | 54.927 | 0.3389 | 40430 |

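For context (not part of the original card), here is a hedged sketch of how these numbers could be recomputed on the GLUE QQP validation split using the `datasets` and `evaluate` libraries; the model id and batch size are illustrative.

```python
# Illustrative evaluation sketch; assumes the checkpoint is reachable under a placeholder id.
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "SEAD-L-6_H-384_A-12-qqp"  # placeholder Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

dataset = load_dataset("glue", "qqp", split="validation")  # 40430 examples
metric = evaluate.load("glue", "qqp")  # reports accuracy and F1

batch_size = 32
for start in range(0, len(dataset), batch_size):
    batch = dataset[start:start + batch_size]
    enc = tokenizer(batch["question1"], batch["question2"],
                    padding=True, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        preds = model(**enc).logits.argmax(dim=-1)
    metric.add_batch(predictions=preds, references=batch["label"])

print(metric.compute())  # expected to be close to the table above
```
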
### Framework versions

- Transformers >=4.8.0
- PyTorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6

If you use these models, please cite the following paper:

```
@article{article,
  author={Mei, Moyan and Sroch, Rohit},
  title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
  volume={3},
  number={1},
  journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
  day={26},
  year={2022},
  month={Feb},
  url={https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
config.json
ADDED
@@ -0,0 +1,35 @@
{
  "_name_or_path": "../artifacts/best_models/qqp/L-6_H-384_A-12/student-ckpt",
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "finetuning_task": "qqp",
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "id2label": {
    "0": 0,
    "1": 1
  },
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "label2id": {
    "0": 0,
    "1": 1
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "problem_type": "single_label_classification",
  "transformers_version": "4.18.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
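As a small illustration (not part of the original commit), the architecture recorded in this config can be inspected with Transformers' `AutoConfig`; the local path below assumes the repository has been cloned.

```python
from transformers import AutoConfig

# Assumes a local clone of this repository (or substitute its Hub id).
config = AutoConfig.from_pretrained(".")
print(config.model_type)            # "bert"
print(config.num_hidden_layers)     # 6
print(config.hidden_size)           # 384
print(config.num_attention_heads)   # 12
```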
eval_results.json
ADDED
@@ -0,0 +1,9 @@
{
  "eval_accuracy": 0.9126143952510511,
  "eval_f1": 0.8822372587580415,
  "eval_loss": 0.33893020248889355,
  "eval_runtime": 23.0122,
  "eval_samples": 40430,
  "eval_samples_per_second": 1756.896,
  "eval_steps_per_second": 54.927
}
flax_model.msgpack
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ff4a73eb797e20edaa5e9dc5a75331adf2e7a88102adce04dfe18357a85d255
size 90859750
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3308bd6349969d05c84b81f2c86c7cf1ceb3e42e1c52dce88a6f145e46e3bcd5
size 90886197
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8147c45c202b0606622f0d215cab0361794e6fac0630a162c84d5d2d7c43f68a
size 91012752
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"do_lower_case": true, "do_basic_tokenize": true, "never_split": null, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "special_tokens_map_file": null, "tokenizer_file": null, "name_or_path": "microsoft/xtremedistil-l6-h384-uncased", "tokenizer_class": "BertTokenizer"}
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5df791ca4045843aa8e2207199981158279521a22265bb70bcf74e98ddc75337
size 2697
vocab.txt
ADDED
The diff for this file is too large to render.
See raw diff