---
language:
  - en

license: mit

tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training

---
The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).

This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini), [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arXiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are intended to be fine-tuned on a downstream task.
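
For example, loading this checkpoint for downstream fine-tuning with the Hugging Face `transformers` library might look like the minimal sketch below; the 3-label classification head and the premise/hypothesis pair are purely illustrative of an NLI-style setup such as MNLI.

```python
# Minimal sketch: load bert-tiny with a fresh (randomly initialized)
# classification head for downstream fine-tuning. The 3-label head is
# only an illustration of an NLI task such as MNLI.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-tiny", num_labels=3
)

# Encode a premise/hypothesis pair and run a forward pass.
inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    return_tensors="pt",
)
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 3]); the head is untrained, so the logits are not meaningful yet
```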
If you use the model, please consider citing both papers:
```bibtex
@misc{bhargava2021generalization,
      title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
      author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
      year={2021},
      eprint={2110.01518},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@article{DBLP:journals/corr/abs-1908-08962,
  author    = {Iulia Turc and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {Well-Read Students Learn Better: The Impact of Student Initialization
               on Knowledge Distillation},
  journal   = {CoRR},
  volume    = {abs/1908.08962},
  year      = {2019},
  url       = {http://arxiv.org/abs/1908.08962},
  eprinttype = {arXiv},
  eprint    = {1908.08962},
  timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
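
Here L is the number of Transformer layers and H the hidden size. A quick way to verify these dimensions, sketched below under the assumption that `transformers` is installed, is to read the checkpoint's config:

```python
# Sketch: map the L/H numbers above onto the Hugging Face config fields.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("prajjwal1/bert-tiny")
print(config.num_hidden_layers)  # L = 2
print(config.hidden_size)        # H = 128
```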
Other models to check out:
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)

The original implementation and more information can be found in [this GitHub repository](https://github.com/prajjwal1/generalize_lm_nli).

Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)