Lucia Zheng committed on
Commit 6499d7d
1 Parent(s): 27a7fe8

Update model card

Files changed (1)
  1. README.md +26 -0
README.md CHANGED
### Custom Legal-BERT
Model and tokenizer files for the Custom Legal-BERT model from [When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset](https://arxiv.org/abs/2104.08671).

### Training Data
The pretraining corpus was constructed by ingesting the entire Harvard Law case corpus from 1965 to the present (https://case.law/). This corpus is substantial: 37GB of text comprising 3,446,187 legal decisions across all federal and state courts, larger than the 15GB BookCorpus/Wikipedia corpus originally used to train BERT.

### Training Objective
This model is pretrained from scratch for 2M steps on the MLM and NSP objectives, with tokenization and sentence segmentation adapted for legal text (cf. the paper).
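
To make the objective concrete, below is a minimal sketch (not the paper's training code) of evaluating the combined MLM + NSP loss with Hugging Face `transformers`; the model ID `zlucia/custom-legalbert`, the toy sentence pair, and the masked position are assumptions for illustration.

```python
# Minimal sketch: score one masked token (MLM) and one sentence-pair label (NSP).
# Assumptions: the Hub model ID and the toy inputs below are illustrative only.
import torch
from transformers import AutoTokenizer, BertForPreTraining

model_id = "zlucia/custom-legalbert"  # assumed ID; use the ID shown on this model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = BertForPreTraining.from_pretrained(model_id)

# Two consecutive sentences, so the correct NSP label is 0 ("B follows A").
encoding = tokenizer("The court granted the motion.", "The defendant appealed.", return_tensors="pt")

# Build MLM labels: ignore every position (-100) except the one we mask.
labels = torch.full_like(encoding["input_ids"], -100)
masked_index = 3
labels[0, masked_index] = encoding["input_ids"][0, masked_index]
encoding["input_ids"][0, masked_index] = tokenizer.mask_token_id

outputs = model(**encoding, labels=labels, next_sentence_label=torch.tensor([0]))
print(float(outputs.loss))  # combined masked-LM + next-sentence-prediction loss
```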

The model also uses a custom domain-specific legal vocabulary. The vocabulary set is constructed using [SentencePiece](https://arxiv.org/abs/1808.06226) on a subsample (approx. 13M) of sentences from our pretraining corpus, with the number of tokens fixed to 32,000.
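
As a rough illustration of that step, the sketch below builds a 32,000-token SentencePiece vocabulary from a plain-text file of sentences; the file names and trainer options are placeholders, not the exact configuration used for the paper.

```python
# Sketch of vocabulary construction with SentencePiece (placeholder paths/options).
import sentencepiece as spm

# legal_sentences.txt would hold the sampled sentences, one per line (placeholder path).
spm.SentencePieceTrainer.train(
    input="legal_sentences.txt",
    model_prefix="legal_vocab",   # writes legal_vocab.model and legal_vocab.vocab
    vocab_size=32000,
)

sp = spm.SentencePieceProcessor(model_file="legal_vocab.model")
print(sp.encode("The court denied the petition for a writ of certiorari.", out_type=str))
```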

### Usage
Please see the [casehold repository](https://github.com/reglab/casehold) for scripts that support computing pretrain loss and finetuning Legal-BERT on the classification and multiple choice tasks described in the paper: Overruling, Terms of Service, and CaseHOLD.
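
For quick experimentation outside those scripts, here is a minimal sketch of loading the released model and tokenizer with Hugging Face `transformers` for a binary classification task such as Overruling; the model ID `zlucia/custom-legalbert` and the example input are assumptions, and the classification head is untrained until fine-tuned.

```python
# Minimal sketch: load the model/tokenizer for sequence classification fine-tuning.
# Assumption: the Hub model ID below; substitute the ID shown on this model card.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "zlucia/custom-legalbert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer(
    "We therefore overrule our prior decision to the extent it conflicts with this opinion.",
    return_tensors="pt",
    truncation=True,
    max_length=128,
)
logits = model(**inputs).logits  # head is randomly initialized; fine-tune before relying on outputs
```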

### Citation
    @inproceedings{zhengguha2021,
        title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset},
        author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho},
        year={2021},
        eprint={2104.08671},
        archivePrefix={arXiv},
        primaryClass={cs.CL},
        booktitle={Proceedings of the 18th International Conference on Artificial Intelligence and Law},
        publisher={Association for Computing Machinery},
        note={(in press)}
    }