---
language:
- fr
license: cc-by-4.0
task_categories:
- question-answering
- text-generation
- table-question-answering
pretty_name: The Laws, centralizing legal texts for better use
dataset_info:
- config_name: fr
features:
- name: jurisdiction
dtype: string
- name: language
dtype: string
- name: text
dtype: string
- name: html
dtype: string
- name: title_main
dtype: string
- name: title_alternative
dtype: string
- name: id_sub
dtype: string
- name: id_main
dtype: string
- name: url_sourcepage
dtype: string
- name: url_sourcefile
dtype: 'null'
- name: date_publication
dtype: string
- name: date_signature
dtype: 'null'
- name: uuid
dtype: string
- name: text_hash
dtype: string
splits:
- name: train
num_bytes: 531412652
num_examples: 162702
download_size: 212898761
dataset_size: 531412652
configs:
- config_name: fr
data_files:
- split: train
path: fr/train-*
tags:
- legal
- droit
- fiscalité
- taxation
- δίκαιο
- recht
- derecho
---
## Dataset Description
- Repository: https://huggingface.co/datasets/HFforLegal/laws
- Leaderboard: N/A
- Point of Contact: Louis Brulé Naudet
The Laws, centralizing legal texts for better use, a community Dataset.
The Laws Dataset is a comprehensive collection of legal texts from various countries, centralized in a common format. This dataset aims to improve the development of legal AI models by providing a standardized, easily accessible corpus of global legal documents.
Join us in our mission to make AI more accessible and understandable for the legal world, ensuring that the power of language models can be harnessed effectively and ethically in the pursuit of justice.
## Objective
The primary objective of this dataset is to centralize laws from around the world in a common format, thereby facilitating:
- Comparative legal studies
- Development of multilingual legal AI models
- Cross-jurisdictional legal research
- Improvement of legal technology tools
By providing a standardized dataset of global legal texts, we aim to accelerate the development of AI models in the legal domain, enabling more accurate and comprehensive legal analysis across different jurisdictions.
## Dataset Structure
The dataset contains the following columns:
- jurisdiction: Capitalized ISO 3166-1 alpha-2 code representing the country or jurisdiction. This column is useful when stacking data from different jurisdictions.
- language: Non-capitalized ISO 639-1 code representing the language of the document. This is particularly useful for multilingual jurisdictions.
- text: The main textual content of the document.
- html: An HTML-structured version of the text. This may include additional structure such as XML (Akoma Ntoso).
- title_main: The primary title of the document. This replaces the 'book' column, as many modern laws are not structured or referred to as 'books'.
- title_alternative: A list of official and non-official (nickname) titles for the document.
- id_sub: Identifier for lower granularity items within the document, such as specific article numbers. This replaces the 'id' column.
- id_main: Document identifier for the main document, such as the European Legislation Identifier (ELI).
- url_sourcepage: The source URL of the web page where the document is published.
- url_sourcefile: The source URL of the document file (e.g., PDF file).
- date_publication: The date when the document was published.
- date_signature: The date when the document was signed.
- uuid: A universally unique identifier for each row in the dataset.
- text_hash: A SHA-256 hash of the 'text' column, useful for verifying data integrity.
- formatted_date: The publication date formatted as 'YYYY-MM-DD HH:MM:SS', derived from the 'date_publication' column.
This structure ensures comprehensive metadata for each legal document, facilitating easier data management, cross-referencing, and analysis across different jurisdictions and languages.
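The column conventions above can be checked programmatically. The following sketch validates a single row against them using only the standard library; the `validate_row` helper is an illustrative name, not part of the dataset's official tooling:

```python
import hashlib
import re


def validate_row(row: dict) -> list:
    """Return a list of problems found in one dataset row (empty if the row is valid)."""
    problems = []
    # jurisdiction: capitalized ISO 3166-1 alpha-2 code, e.g. "FR"
    if not re.fullmatch(r"[A-Z]{2}", row.get("jurisdiction", "")):
        problems.append("jurisdiction is not an uppercase alpha-2 code")
    # language: non-capitalized ISO 639-1 code, e.g. "fr"
    if not re.fullmatch(r"[a-z]{2}", row.get("language", "")):
        problems.append("language is not a lowercase ISO 639-1 code")
    # text_hash: SHA-256 hex digest of the raw text column
    expected = hashlib.sha256(str(row.get("text", "")).encode()).hexdigest()
    if row.get("text_hash") != expected:
        problems.append("text_hash does not match SHA-256 of text")
    return problems


row = {
    "jurisdiction": "FR",
    "language": "fr",
    "text": "Article 1er ...",
    "text_hash": hashlib.sha256("Article 1er ...".encode()).hexdigest(),
}
assert validate_row(row) == []
```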
## Easy-to-use script for hashing the text
Datasets version:
```python
import hashlib

import datasets


def hash(
    text: str
) -> str:
    """
    Create or update the hash of the document content.

    This function takes a text input, converts it to a string, encodes it in UTF-8,
    and then generates a SHA-256 hash of the encoded text.

    Parameters
    ----------
    text : str
        The text content to be hashed.

    Returns
    -------
    str
        The SHA-256 hash of the input text, represented as a hexadecimal string.
    """
    return hashlib.sha256(str(text).encode()).hexdigest()


dataset = dataset.map(lambda x: {"text_hash": hash(x["text"])})
```
Polars version:
```python
import hashlib

import polars as pl


def add_text_hash_column(
    df: pl.DataFrame,
    text_column: str = "text",
    hash_column: str = "text_hash"
) -> pl.DataFrame:
    """
    Add a column with SHA-256 hash values of a specified text column to a Polars DataFrame.

    This function computes the SHA-256 hash of the values in the specified text column
    and adds it as a new column to the DataFrame.

    Parameters
    ----------
    df : pl.DataFrame
        The input Polars DataFrame.

    text_column : str, optional
        The name of the column containing the text to be hashed (default is "text").

    hash_column : str, optional
        The name of the new column to be added with hash values (default is "text_hash").

    Returns
    -------
    pl.DataFrame
        A new DataFrame with the hash column added.

    Examples
    --------
    >>> import polars as pl
    >>> df = pl.DataFrame({"text": ["Hello", "World", "OpenAI"]})
    >>> df_with_hash = add_text_hash_column(df)
    >>> print(df_with_hash)
    shape: (3, 2)
    ┌────────┬──────────────────────────────────────────────────────────────────┐
    │ text   ┆ text_hash                                                        │
    │ ---    ┆ ---                                                              │
    │ str    ┆ str                                                              │
    ╞════════╪══════════════════════════════════════════════════════════════════╡
    │ Hello  ┆ 185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969 │
    │ World  ┆ 78ae647dc5544d227130a0682a51e30bc7777fbb6d8a8f17007463a3ecd1d524 │
    │ OpenAI ┆ 0c3f4a61f7e5d29abc29d63f1a0cad36c49ffd5b5c0b5b38ce0c7aa0bdc94696 │
    └────────┴──────────────────────────────────────────────────────────────────┘

    Raises
    ------
    ValueError
        If the specified text_column is not found in the DataFrame.
    """
    if text_column not in df.columns:
        raise ValueError(f"Column '{text_column}' not found in the DataFrame.")

    return df.with_columns(
        pl.col(text_column)
        .map_elements(
            lambda x: hashlib.sha256(str(x).encode()).hexdigest(),
            return_dtype=pl.String,
        )
        .alias(hash_column)
    )


dataframe = add_text_hash_column(dataframe)
```
## Upload to the Hub

Here is a code snippet to push a dedicated config (in this example, for France) to the HF hub:
```python
hf_dataset.push_to_hub(
    repo_id="HFforLegal/laws-new",
    config_name="fr",
    split="train",
)
```
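When preparing rows for a new split, the `uuid` and `text_hash` fields can be derived automatically from the text. A minimal sketch using only the standard library (the `make_record` helper is illustrative, not part of the official tooling); the resulting list of dicts can then be wrapped with `datasets.Dataset.from_list` before pushing:

```python
import hashlib
import uuid


def make_record(jurisdiction: str, language: str, text: str, **metadata) -> dict:
    """Build one dataset row, deriving uuid and text_hash from the inputs."""
    return {
        "jurisdiction": jurisdiction,
        "language": language,
        "text": text,
        # SHA-256 of the raw text, matching the dataset's integrity convention
        "text_hash": hashlib.sha256(str(text).encode()).hexdigest(),
        # a fresh universally unique identifier for this row
        "uuid": str(uuid.uuid4()),
        **metadata,
    }


record = make_record("FR", "fr", "Article 1er ...", title_main="Code civil")
# e.g. datasets.Dataset.from_list([record, ...]).push_to_hub(...)
```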
## Country-based Configs
The dataset uses country-based configs to organize legal documents from different jurisdictions. Each config is identified by the ISO 3166-1 alpha-2 code of the corresponding country.
### ISO 3166-1 alpha-2 Codes
ISO 3166-1 alpha-2 codes are two-letter country codes defined in ISO 3166-1, part of the ISO 3166 standard published by the International Organization for Standardization (ISO).
Some examples of ISO 3166-1 alpha-2 codes:
- France: fr
- United States: us
- United Kingdom: gb
- Germany: de
- Japan: jp
- Brazil: br
- Australia: au
Before submitting a new split, please make sure the proposed config name matches the ISO 3166-1 alpha-2 code of the related country.
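A proposed config name can be sanity-checked before submission. The check below only verifies the two-lowercase-letter shape, not membership in the official ISO 3166-1 list, so it is a cheap first filter rather than a full validation:

```python
import re


def looks_like_alpha2(config_name: str) -> bool:
    """Cheap format check: exactly two lowercase ASCII letters, e.g. 'fr' or 'de'."""
    return re.fullmatch(r"[a-z]{2}", config_name) is not None


assert looks_like_alpha2("fr")
assert not looks_like_alpha2("FRA")
```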
## Ethical Considerations
While this dataset provides a valuable resource for legal AI development, users should be aware of the following ethical considerations:
- Privacy: Ensure that all personal information has been properly anonymized.
- Bias: Be aware of potential biases in the source material and in the selection of included laws.
- Currency: Laws change over time. Always verify that you're working with the most up-to-date version of a law for any real-world application.
- Jurisdiction: Legal interpretations can vary by jurisdiction. AI models trained on this data should not be used as a substitute for professional legal advice.
## Citing & Authors
If you use this dataset in your research, please use the following BibTeX entry.
```bibtex
@misc{HFforLegal2024,
  author = {Louis Brulé Naudet},
  title = {The Laws, centralizing legal texts for better use},
  year = {2024},
  howpublished = {\url{https://huggingface.co/datasets/HFforLegal/laws}},
}
```
## Feedback
If you have any feedback, please reach out at louisbrulenaudet@icloud.com.