julien-c (HF staff) committed
Commit: 43e9576
Parent: 14115f4

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/canwenxu/BERT-of-Theseus-MNLI/README.md

Files changed (1): README.md (+24, -0)

README.md ADDED:
---
thumbnail: https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png
datasets:
- multi_nli
---

# BERT-of-Theseus
See our paper ["BERT-of-Theseus: Compressing BERT by Progressive Module Replacing"](http://arxiv.org/abs/2002.02925).

BERT-of-Theseus is a compressed BERT obtained by progressively replacing the components of the original BERT with more compact modules.

![BERT of Theseus](https://github.com/JetRunner/BERT-of-Theseus/blob/master/bert-of-theseus.png?raw=true)

## Load Pretrained Model on MNLI

We provide a 6-layer model pretrained on MNLI as a general-purpose model. It transfers to other sentence classification tasks and outperforms DistilBERT (which has the same 6-layer structure) on six GLUE tasks (dev set):

| Method          | MNLI | MRPC | QNLI | QQP  | RTE  | SST-2 | STS-B |
|-----------------|------|------|------|------|------|-------|-------|
| BERT-base       | 83.5 | 89.5 | 91.2 | 89.8 | 71.1 | 91.5  | 88.9  |
| DistilBERT      | 79.0 | 87.5 | 85.3 | 84.9 | 59.9 | 90.7  | 81.2  |
| BERT-of-Theseus | 82.1 | 87.5 | 88.8 | 88.8 | 70.1 | 91.8  | 87.8  |
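
Loading this checkpoint follows the standard 🤗 Transformers pattern. A minimal sketch (the model id `canwenxu/BERT-of-Theseus-MNLI` is taken from the file-history link above; `AutoModel` returns the bare encoder, since no MNLI head is shipped):

```python
# Minimal loading sketch, assuming the standard 🤗 Transformers Auto* API.
# The model id is inferred from the original file-history link above.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("canwenxu/BERT-of-Theseus-MNLI")
model = AutoModel.from_pretrained("canwenxu/BERT-of-Theseus-MNLI")

inputs = tokenizer("BERT-of-Theseus compresses BERT to 6 layers.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # -> (1, sequence_length, hidden_size)
```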

Please note: this checkpoint is intended for [Intermediate-Task Transfer Learning](https://arxiv.org/abs/2005.00628), so it does not include the classification head for MNLI. Please fine-tune it before use (as you would DistilBERT).
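
Fine-tuning amounts to attaching a fresh classification head and training it on the target task. A minimal sketch, assuming the standard `AutoModelForSequenceClassification` API (the two-label target task here is hypothetical):

```python
# Fine-tuning sketch: from_pretrained adds a randomly initialized
# classification head on top of the pretrained 6-layer encoder.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "canwenxu/BERT-of-Theseus-MNLI",
    num_labels=2,  # hypothetical binary target task
)
# Train on the target task as usual (e.g., with transformers.Trainer).
```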