---
datasets:
  - pile-of-law/pile-of-law
language:
  - en
library_name: transformers
license: other
---

# Model Card for budgetlongformer-diverse

A legal pretrained model trained with the Replaced Token Detection (RTD) objective on the Pile-of-Law dataset, with a context window of 4,096 tokens.

## Model Details

### Model Description

A legal pretrained model trained with the ELECTRA objective (replaced token detection) on the Pile-of-Law dataset, with a context window of 4,096 tokens.

- **Developed by:** Joel Niklaus, Daniele Giofré
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** other
- **Resources for more information:** see the Citation section below

## Uses

### Direct Use

The model can be used directly to generate embeddings, for example for similarity search. It likely works best on US-focused legal data.
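
A minimal sketch of extracting document embeddings by mean pooling, assuming the checkpoint can be loaded with `transformers`; the model ID below is a placeholder for the actual repository name.

```python
# Minimal sketch: mean-pooled document embeddings for similarity search.
# The model ID is a placeholder; replace it with the actual repository name.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "path/to/budgetlongformer-diverse"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

texts = [
    "The lessee shall maintain the premises in good repair.",
    "Tenant agrees to keep the property in good condition.",
]
inputs = tokenizer(texts, padding=True, truncation=True, max_length=4096, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)

# Mean-pool over non-padding tokens to obtain one vector per document.
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two documents.
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity: {similarity.item():.3f}")
```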

### Downstream Use

The model can be fine-tuned for any NLU task or, when coupled with a decoder, for generative tasks. In our experiments on summarization with the BillSum dataset, we found that random initialization of the decoder improved performance.
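
As an illustration, a minimal fine-tuning sketch for a downstream classification task, assuming the checkpoint loads through `AutoModelForSequenceClassification`; the model ID, toy data and hyperparameters are placeholders, not the settings used in the paper.

```python
# Minimal fine-tuning sketch for a downstream NLU task (binary classification).
# Model ID, toy data and hyperparameters are placeholders for illustration only.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "path/to/budgetlongformer-diverse"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Toy data standing in for a real legal NLU dataset.
raw = Dataset.from_dict({
    "text": ["This clause limits liability.", "The court denied the motion."],
    "label": [0, 1],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=4096)

dataset = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,  # 4096-token inputs are memory-hungry
    gradient_accumulation_steps=8,
    learning_rate=3e-5,
    num_train_epochs=1,
)
Trainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer).train()
```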

### Out-of-Scope Use

The model will likely perform worse on non-legal text, on text in languages other than English, and on legal text originating from outside the US.

## Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

### Recommendations

As with any large language model, there is a risk of biased or unfair output. Researchers using the model should put appropriate safeguards in place to identify biased and/or toxic language.

## Training Details

### Training Data

The diverse model was trained on caselaw (“Court Listener Opinions” & “Court Listener Docket Entry Documents”), legislation (“US Code”, “State Codes” & “EURLEX”) and contracts (“Atticus Contracts” & “EDGAR Contracts”) from the public Pile-of-Law dataset. To balance the training data, we limited the number of documents to 500K (this affects Court Listener Opinions, Court Listener Docket Entry Documents and EDGAR Contracts).

### Training Procedure

#### Preprocessing

More information needed

#### Speeds, Sizes, Times

More information needed

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

We tested the model on the BillSum and PubMed summarization datasets, achieving state-of-the-art ROUGE scores for the respective parameter sizes as of August 2022.

#### Factors

More information needed

#### Metrics

Following standard practice in summarization research, we report ROUGE-1, ROUGE-2 and ROUGE-L.
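
For reference, a brief sketch of computing these scores with the Hugging Face `evaluate` package (an illustration, not the exact evaluation script used in the experiments); the example texts are invented.

```python
# Sketch: computing ROUGE-1/2/L with the `evaluate` package (requires rouge_score).
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the bill amends the tax code to extend the credit"]
references = ["this bill amends the internal revenue code to extend the tax credit"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys include rouge1, rouge2, rougeL
```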

### Results

More information needed

## Environmental Impact

Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- **Hardware Type:** 4 x 16GB NVIDIA V100
- **Hours used:** 144
- **Cloud Provider:** AWS
- **Compute Region:** US East
- **Carbon Emitted:** 15.98 kg CO₂eq

## Model Architecture and Objective

We used a Longformer attention window of 256 for both the generator and the discriminator. The generator model was three times smaller than the discriminator model; in particular, we reduced the generator's depth (number of hidden layers) rather than its width (embedding size, hidden size and intermediate size). We used an MLM probability of 25% for the generator.
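
An illustrative sketch of such an ELECTRA-style generator/discriminator pair built from Longformer encoders; the layer counts and sizes below are assumptions chosen for illustration, not the exact released hyperparameters.

```python
# Illustrative ELECTRA-style setup with Longformer encoders (not the released training code).
# Sizes are assumptions: the generator is reduced in depth, not width, per the description above.
from transformers import LongformerConfig, LongformerForMaskedLM, LongformerModel

discriminator_config = LongformerConfig(
    max_position_embeddings=4098,  # 4096 tokens plus special positions
    attention_window=256,
    hidden_size=768,
    intermediate_size=3072,
    num_hidden_layers=12,
)
generator_config = LongformerConfig(
    max_position_embeddings=4098,
    attention_window=256,
    hidden_size=768,               # same width as the discriminator
    intermediate_size=3072,
    num_hidden_layers=4,           # reduced depth (assumed 12 -> 4 for illustration)
)

generator = LongformerForMaskedLM(generator_config)    # proposes replacement tokens (MLM head)
discriminator = LongformerModel(discriminator_config)  # encoder backbone; RTD head omitted in this sketch
```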

## Compute Infrastructure

Amazon SageMaker Notebooks.

### Hardware

4 x 16GB NVIDIA V100

### Software

transformers

## Citation

**BibTeX:**

    @misc{niklaus2022budgetlongformer,
      title={BudgetLongformer: Can we Cheaply Pretrain a SotA Legal Language Model From Scratch?},
      author={Joel Niklaus and Daniele Giofré},
      year={2022},
      eprint={2211.17135},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
    }

## Model Card Authors

Joel Niklaus, Daniele Giofré