zlucia committed on
Commit
e7bff2b
1 Parent(s): 8d55ad0

Update README.md

Files changed (1)
  1. README.md +13 -11
README.md CHANGED
@@ -11,14 +11,16 @@ This model is initialized with the base BERT model (uncased, 110M parameters), [
 Please see the [casehold repository](https://github.com/reglab/casehold) for scripts that support computing pretrain loss and finetuning on Legal-BERT for classification and multiple choice tasks described in the paper: Overruling, Terms of Service, CaseHOLD.
 
 ### Citation
-	@inproceedings{zhengguha2021,
-		title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset},
-		author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho},
-		year={2021},
-		eprint={2104.08671},
-		archivePrefix={arXiv},
-		primaryClass={cs.CL},
-		booktitle={Proceedings of the 18th International Conference on Artificial Intelligence and Law},
-		publisher={Association for Computing Machinery},
-		note={(in press)}
-	}
+```
+@inproceedings{zhengguha2021,
+title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset},
+author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho},
+year={2021},
+eprint={2104.08671},
+archivePrefix={arXiv},
+primaryClass={cs.CL},
+booktitle={Proceedings of the 18th International Conference on Artificial Intelligence and Law},
+publisher={Association for Computing Machinery},
+note={(in press)}
+}
+```
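
For orientation, the README text in this diff points to the casehold repository for pretrain-loss and finetuning scripts. Below is a minimal sketch of loading the model with Hugging Face Transformers and computing a masked-language-model loss in that spirit; the model id `zlucia/legalbert` is an assumption, not stated in this commit, so substitute this repository's actual id if it differs.

```python
# Minimal sketch (not part of the commit): load the model and compute an
# MLM loss with Hugging Face Transformers. The model id below is assumed.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "zlucia/legalbert"  # hypothetical id; replace with this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Passing the input ids as labels yields a cross-entropy loss over all
# tokens, analogous to the pretrain-loss computation the README references.
inputs = tokenizer("The court held that the contract was void.", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss.item())
```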