Update README.md
README.md CHANGED
````diff
@@ -13,7 +13,8 @@ The model also uses a custom domain-specific legal vocabulary. The vocabulary se
 Please see the [casehold repository](https://github.com/reglab/casehold) for scripts that support computing pretrain loss and finetuning on Legal-BERT for classification and multiple choice tasks described in the paper: Overruling, Terms of Service, CaseHOLD.
 
 ### Citation
-
+```
+@inproceedings{zhengguha2021,
 title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset},
 author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho},
 year={2021},
@@ -23,4 +24,5 @@ Please see the [casehold repository](https://github.com/reglab/casehold) for scr
 booktitle={Proceedings of the 18th International Conference on Artificial Intelligence and Law},
 publisher={Association for Computing Machinery},
 note={(in press)}
-
+}
+```
````
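The full pretrain-loss and finetuning scripts live in the [casehold repository](https://github.com/reglab/casehold) referenced above. Purely as orientation, here is a minimal sketch of loading a Legal-BERT checkpoint with Hugging Face `transformers` and scoring a single masked token; the model identifier below is an assumption, not something stated in this commit, so substitute the name published on the model card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed Hugging Face identifier; check the model card for the published name.
MODEL_ID = "zlucia/legalbert"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)   # custom legal vocabulary ships with the checkpoint
model = AutoModelForMaskedLM.from_pretrained(MODEL_ID)

# Mask one token in a short legal sentence and score it with the pretrained MLM head.
text = "The court [MASK] the prior decision."
inputs = tokenizer(text, return_tensors="pt")

# Compute the loss only at the masked position; -100 tells the loss to ignore a token.
labels = inputs["input_ids"].clone()
labels[inputs["input_ids"] != tokenizer.mask_token_id] = -100

with torch.no_grad():
    outputs = model(**inputs, labels=labels)
print(f"masked-LM loss at the masked position: {outputs.loss.item():.3f}")
```

The casehold scripts compute the full pretrain loss with random masking over whole corpora; this sketch only illustrates the loading and scoring API on one sentence.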