prajjwal1 committed on
Commit 008ccf2
1 Parent(s): 102cd86

added bibtex

Files changed (1)
  1. README.md +12 -0
README.md CHANGED
@@ -1,4 +1,16 @@
  The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are intended to be fine-tuned on a downstream task.
+ If you use the model, please consider citing the paper:
+ ```
+ @misc{bhargava2021generalization,
+     title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
+     author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
+     year={2021},
+     eprint={2110.01518},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+ The original implementation and more information can be found in [this GitHub repository](https://github.com/prajjwal1/generalize_lm_nli).

  You can check out:
  - `prajjwal1/bert-tiny` (L=2, H=128)
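
For context on the README's note that these checkpoints are meant to be fine-tuned on a downstream task, here is a minimal sketch of loading `prajjwal1/bert-tiny` for classification with the Hugging Face `transformers` library. The library choice, `num_labels=2`, and the toy input are illustrative assumptions, not part of this commit:

```python
# Sketch (not part of this commit): load the converted PyTorch checkpoint
# and attach a fresh classification head for downstream fine-tuning.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-tiny",  # L=2, H=128 variant listed above
    num_labels=2,           # assumption: a binary task; adjust for yours
)

# Toy forward pass; real use would wrap this in a training loop
# (e.g. transformers.Trainer) over labeled task data, since the
# classification head starts out randomly initialized.
inputs = tokenizer("An example sentence.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```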