Model Card for SaulLM-141B-BASE

Note: This model is a research artifact and should be considered as such.

Model Details

Model Description

SaulLM-141B-BASE is a state-of-the-art language model specifically designed for the legal domain. It was developed through a collaboration between Equall and MICS at CentraleSupélec (Université Paris-Saclay) and aims to contribute to the advancement of LLMs specialized for legal work.

  • Developed by: Equall and MICS of CentraleSupélec (Université Paris-Saclay)
  • Model type: A 141-billion-parameter model pretrained and fine-tuned for legal tasks, leveraging data from US and European legal databases.
  • Language(s) (NLP): English
  • License: MIT License
  • Finetuned from model: A base model built by Equall through continued pretraining of Mixtral models.

Intended Uses & Limitations

Intended Uses

SaulLM-141B-BASE is intended to support further research and to be adapted for a variety of legal use cases; it is the base model from which SaulLM-141B-Instruct was fine-tuned.
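For researchers adapting the model, a minimal inference sketch with Hugging Face transformers might look like the following. This is an assumption-laden sketch, not verified usage instructions from the authors: the repo id is taken from this page, and the loading settings are common choices for very large models. As a base (non-instruct) model, SaulLM-141B-BASE performs plain text continuation, so inputs should be phrased as text to complete rather than as chat instructions.

```python
MODEL_ID = "Equall/SaulLM-141-Base"  # repo id as listed on this page; verify before use


def build_prompt(question: str) -> str:
    # Base models continue text rather than follow instructions,
    # so frame the input as a passage to be completed.
    return f"Question: {question}\nAnswer:"


def generate(question: str, max_new_tokens: int = 128) -> str:
    # Imports kept local so the prompt helper above stays dependency-free.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # 141B parameters will not fit on a single GPU: load in bfloat16 and
    # shard across available devices with device_map="auto".
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(output[0], skip_special_tokens=True)


# Example (requires substantial GPU memory and the model weights):
# print(generate("What constitutes consideration in contract law?"))
```

The bfloat16 dtype and device_map="auto" are assumptions chosen to make the 141B weights fit across multiple accelerators; adjust them to your hardware.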

Limitations

The information provided by the model is for informational purposes only and should not be interpreted as legal advice. In addition, because SaulLM-141B-BASE was pretrained with a focus on US and European legal systems, it may perform less well on legal systems from other jurisdictions.

Bias, Risks, and Ethical Considerations

Bias and Risks

Despite efforts to mitigate bias, SaulLM-141B-BASE may still exhibit biases inherent in its training data or otherwise produce inaccurate responses. The model was trained on information available up to a certain point in time and cannot account for legal developments after that cutoff. Users should be cautious and critically evaluate the model's outputs, especially in sensitive legal matters. Responsibility for decisions made based on the model's outputs rests with the user, not the model or its developers. Users are encouraged to seek the assistance of qualified legal professionals where legal advice is needed.

Ethical Considerations

Users must use SaulLM-141B responsibly, ensuring that the model is not misused in a way that violates the law or infringes on the rights of others. Among other things, the model may not be used to generate harmful content, spread misinformation, or violate privacy or intellectual property rights.

Technical Details

Training Data

SaulLM-141B was trained on a rich dataset comprising European and US legal texts, court rulings, and legislative documents.

Citation

To reference SaulLM-141B in your work, please cite the accompanying paper:

@misc{colombo2024saullm54bsaullm141bscaling,
      title={SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain}, 
      author={Pierre Colombo and Telmo Pires and Malik Boudiaf and Rui Melo and Dominic Culver and Sofia Morgado and Etienne Malaboeuf and Gabriel Hautreux and Johanne Charpentier and Michael Desa},
      year={2024},
      eprint={2407.19584},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.19584}, 
}
