Update README.md
Please see the [casehold repository](https://github.com/reglab/casehold) for scripts that support computing pretrain loss and finetuning Legal-BERT on the classification and multiple choice tasks described in the paper: Overruling, Terms of Service, CaseHOLD.
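Outside of those scripts, the model can also be loaded directly with the Hugging Face `transformers` library. A minimal sketch of masked-token prediction, assuming a placeholder model id `casehold/legalbert` (substitute the id of this repository):

```python
# Minimal sketch: load the model and predict a masked token with
# Hugging Face transformers. "casehold/legalbert" is a placeholder
# model id; substitute the id of this repository.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("casehold/legalbert")
model = AutoModelForMaskedLM.from_pretrained("casehold/legalbert")

text = "The court [MASK] the defendant's motion to dismiss."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Decode the top prediction for the masked position.
mask_idx = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_id = logits[0, mask_idx].argmax(dim=-1)
print(tokenizer.decode(top_id))
```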
### Citation

```
@inproceedings{zhengguha2021,
    title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset},
    author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho},
    year={2021},
    eprint={2104.08671},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    booktitle={Proceedings of the 18th International Conference on Artificial Intelligence and Law},
    publisher={Association for Computing Machinery},
    note={(in press)}
}
```