zlucia committed on
Commit 8d55ad0 (1 parent: 0b9e19e)

Update model card

Files changed (1):
  1. README.md +12 -12
README.md CHANGED
@@ -5,20 +5,20 @@ Model and tokenizer files for Legal-BERT model from [When Does Pretraining Help?
  The pretraining corpus was constructed by ingesting the entire Harvard Law case corpus from 1965 to the present (https://case.law/). The size of this corpus (37GB) is substantial, representing 3,446,187 legal decisions across all federal and state courts, and is larger than the size of the BookCorpus/Wikipedia corpus originally used to train BERT (15GB).
 
  ### Training Objective
- This model is initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased) and trained for an additional 1M steps on the MLM and NSP objective, with tokenization and sentence segmentation adapted for legal text (cf. the paper).
+ This model is initialized with the base BERT model (uncased, 110M parameters), [bert-base-uncased](https://huggingface.co/bert-base-uncased), and trained for an additional 1M steps on the MLM and NSP objective, with tokenization and sentence segmentation adapted for legal text (cf. the paper).
 
  ### Usage
  Please see the [casehold repository](https://github.com/reglab/casehold) for scripts that support computing pretrain loss and finetuning on Legal-BERT for classification and multiple choice tasks described in the paper: Overruling, Terms of Service, CaseHOLD.
 
  ### Citation
- @inproceedings{zhengguha2021,
- title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset},
- author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho},
- year={2021},
- eprint={2104.08671},
- archivePrefix={arXiv},
- primaryClass={cs.CL},
- booktitle={Proceedings of the 18th International Conference on Artificial Intelligence and Law},
- publisher={Association for Computing Machinery},
- note={(in press)}
- }
+ 	@inproceedings{zhengguha2021,
+ 		title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset},
+ 		author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho},
+ 		year={2021},
+ 		eprint={2104.08671},
+ 		archivePrefix={arXiv},
+ 		primaryClass={cs.CL},
+ 		booktitle={Proceedings of the 18th International Conference on Artificial Intelligence and Law},
+ 		publisher={Association for Computing Machinery},
+ 		note={(in press)}
+ 	}
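
For quick reference outside the diff above, here is a minimal sketch of loading these model and tokenizer files with Hugging Face `transformers` and querying the MLM head. The hub id `zlucia/legalbert` is an assumption used as a placeholder; substitute the actual repository id or a local path to the downloaded files. The casehold repository linked in the Usage section remains the reference for pretrain-loss computation and the Overruling, Terms of Service, and CaseHOLD finetuning tasks.

```python
# Minimal sketch (placeholder hub id, see note above): load Legal-BERT and
# query its masked-language-model head on a sample legal sentence.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "zlucia/legalbert"  # assumed id; replace with the real repo id or a local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
model.eval()

# Mask a single token and ask the model to fill it in.
text = f"The court {tokenizer.mask_token} the defendant's motion to dismiss."
inputs = tokenizer(text, return_tensors="pt")
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    logits = model(**inputs).logits

# Top prediction for the masked position.
predicted_id = logits[0, mask_positions].argmax(dim=-1)
print("Predicted token:", tokenizer.decode(predicted_id))
```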