harshil10 committed on
Commit b4e3675
1 Parent(s): a416b2e

Update README.md

Files changed (1)
  1. README.md +0 -31
README.md CHANGED
@@ -29,34 +29,3 @@ If you use the model, please consider citing both the papers:
   primaryClass={cs.CL}
 }
 
-@article{DBLP:journals/corr/abs-1908-08962,
-  author    = {Iulia Turc and
-               Ming{-}Wei Chang and
-               Kenton Lee and
-               Kristina Toutanova},
-  title     = {Well-Read Students Learn Better: The Impact of Student Initialization
-               on Knowledge Distillation},
-  journal   = {CoRR},
-  volume    = {abs/1908.08962},
-  year      = {2019},
-  url       = {http://arxiv.org/abs/1908.08962},
-  eprinttype = {arXiv},
-  eprint    = {1908.08962},
-  timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
-  biburl    = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
-  bibsource = {dblp computer science bibliography, https://dblp.org}
-}
-
-```
-Config of this model:
-- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
-
-
-Other models to check out:
-- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
-- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
-- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
-
-Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
-
-Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
 
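For context, the removed section documented the checkpoint configuration (`prajjwal1/bert-tiny`, L=2, H=128) and listed its sibling checkpoints. A minimal sketch of loading one of these checkpoints with the Hugging Face `transformers` library is shown below; the `AutoModel`/`AutoTokenizer` calls and the example sentence are illustrative and not part of the original README.

```python
# Minimal sketch (not from the original README): loading the
# prajjwal1/bert-tiny checkpoint (L=2 layers, H=128 hidden size)
# with the Hugging Face transformers library.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
model = AutoModel.from_pretrained("prajjwal1/bert-tiny")

inputs = tokenizer("Well-read students learn better.", return_tensors="pt")
outputs = model(**inputs)

# The hidden dimension of the output should match the documented H=128.
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 128)
```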