---
language:
- fr
tags:
- france
- cnil
- loi
- deliberations
- decisions
- embeddings
- open-data
- government
pretty_name: CNIL Deliberations Dataset
size_categories:
- 10K<n<100K
license: etalab-2.0
configs:
- config_name: latest
  data_files: data/cnil-latest/*.parquet
  default: true
---
📢 2026 survey: use of MediaTech's public datasets
Do you use this dataset or other datasets from our MediaTech collection? Your feedback matters! Help us improve our public datasets by answering this quick survey (5 min): 👉 https://grist.numerique.gouv.fr/o/albert/forms/gF4hLaq9VvUog6c5aVDuMw/11 Thank you for your contribution! 🙌
🇫🇷 CNIL Deliberations Dataset
This dataset is a processed and embedded version of the official deliberations and decisions published by the CNIL (Commission Nationale de l’Informatique et des Libertés), the French data protection authority.
It includes a variety of legal documents such as opinions, recommendations, simplified norms, general authorizations, and formal decisions.
The original data is downloaded from the dedicated DILA open data repository, and the dataset is also available on data.gouv.fr (Les délibérations de la CNIL).
The dataset provides structured, chunked, semantic-ready data, making it suitable for semantic search, AI legal assistants, or Retrieval-Augmented Generation (RAG) pipelines.
These chunks were then embedded using the BAAI/bge-m3 model.
🗂️ Dataset Contents
The dataset is provided in Parquet format and includes the following columns:
| Column Name | Type | Description |
|---|---|---|
| `chunk_id` | str | Unique identifier for each chunk. |
| `doc_id` | str | Document identifier of the deliberation. |
| `chunk_index` | int | Index of the chunk within the same deliberation document, starting from 1. |
| `chunk_xxh64` | str | XXH64 hash of the `chunk_text` value. |
| `nature` | str | Type of act (e.g., deliberation, decision). |
| `status` | str | Status of the document (e.g., vigueur, vigueur_diff). |
| `nature_delib` | str | Specific nature of the deliberation. |
| `title` | str | Title of the deliberation or decision. |
| `full_title` | str | Full title of the deliberation or decision. |
| `number` | str | Official reference number. |
| `date` | str | Date of publication (format: YYYY-MM-DD). |
| `text` | str | Raw text content of the chunk extracted from the deliberation or decision. |
| `chunk_text` | str | Formatted text chunk used for embedding (includes title + content). |
| `embeddings_bge-m3` | str | Embedding vector of `chunk_text` computed with BAAI/bge-m3, stored as a JSON string. |
🛠️ Data Processing Methodology
1. 📥 Field Extraction
Data was extracted from the dedicated DILA open data repository.
The following transformations were applied:
- Basic fields: `doc_id` (cid), `title`, `full_title`, `number`, `date`, `nature`, `status`, `nature_delib` were taken directly from the source XML file.
- Generated fields (see the sketch after this list):
  - `chunk_id`: a generated unique identifier combining the `doc_id` and the `chunk_index`.
  - `chunk_index`: the index of the chunk within a same deliberation document. Each document has a unique `doc_id`.
  - `chunk_xxh64`: the XXH64 hash of the `chunk_text` value. It is useful to determine whether the `chunk_text` value has changed from one version to another.
- Textual fields:
  - `text`: chunk of the main text content.
  - `chunk_text`: combines the `title` and the main `text` body to maximize embedding relevance.
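To make the generated fields concrete, here is a minimal sketch of how they could be derived for one chunk. The `chunk_id` separator and the helper function are illustrative assumptions; the actual construction is defined in the MediaTech repository.

```python
import xxhash

def build_generated_fields(doc_id: str, chunk_index: int, chunk_text: str) -> dict:
    """Illustrative sketch: derive the generated fields for a single chunk."""
    return {
        # Assumption: doc_id and the 1-based chunk index joined with an underscore.
        "chunk_id": f"{doc_id}_{chunk_index}",
        "chunk_index": chunk_index,
        # XXH64 hash of chunk_text, used to detect changes between dataset versions.
        "chunk_xxh64": xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest(),
    }
```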
2. ✂️ Text Chunking
The `chunk_text` value includes the title and the textual content of the chunk.
This strategy is designed to improve semantic search for document search use cases on administrative procedures.
LangChain's RecursiveCharacterTextSplitter was used to create these chunks (the `text` value). The parameters used are:
- `chunk_size` = 1500
- `chunk_overlap` = 200
- `length_function` = len
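For reference, a minimal sketch of this chunking step, assuming the `langchain-text-splitters` package and an illustrative input text:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,      # maximum characters per chunk
    chunk_overlap=200,    # characters shared between consecutive chunks
    length_function=len,  # chunk length measured in characters
)

# `deliberation_text` stands for the main text body of one CNIL deliberation.
deliberation_text = "Texte intégral d'une délibération de la CNIL ..."
chunks = splitter.split_text(deliberation_text)  # list[str], one entry per chunk
```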
🧠 3. Embeddings Generation
Each `chunk_text` was embedded using the BAAI/bge-m3 model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a string, but can easily be parsed back into a `list[float]` or a NumPy array.
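A minimal sketch of this step, using the `sentence-transformers` library as one possible way to run BAAI/bge-m3 (the card does not state which library was actually used, so treat this as an assumption):

```python
import json
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")

# Illustrative chunk_text values; in practice these come from the chunking step above.
chunk_texts = ["Titre de la délibération\nContenu textuel du chunk ..."]
vectors = model.encode(chunk_texts, normalize_embeddings=True)

# Serialized as JSON strings, matching the format of the embeddings_bge-m3 column.
embeddings_json = [json.dumps(vec.tolist()) for vec in vectors]
```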
🎓 Tutorials
🔄 1. The chunking doesn't fit your use case?
If you need to reconstitute the original, un-chunked dataset, you can follow this tutorial notebook available on our GitHub repository.
⚠️ The tutorial is only relevant for datasets that were chunked without overlap.
🤖 2. How to load MediaTech's datasets from Hugging Face and use them in a RAG pipeline?
To learn how to load MediaTech's datasets from Hugging Face and integrate them into a Retrieval-Augmented Generation (RAG) pipeline, check out our step-by-step RAG tutorial available on our GitHub repository!
📌 3. Embedding Use Notice
⚠️ The embeddings_bge-m3 column is stored as a stringified list of floats (e.g., "[-0.03062629,-0.017049594,...]").
To use it as a vector, you need to parse it into a list of floats or NumPy array.
Using the datasets library:
```python
import pandas as pd
import json
from datasets import load_dataset

# PyArrow must be installed in your Python environment for this example: pip install pyarrow
dataset = load_dataset("AgentPublic/cnil")
df = pd.DataFrame(dataset["train"])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
Using downloaded local Parquet files:
```python
import pandas as pd
import json

# PyArrow must be installed in your Python environment for this example: pip install pyarrow
df = pd.read_parquet(path="cnil-latest/")  # assuming all Parquet files are located in this folder
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
You can then use the dataframe as you wish, for example by inserting its data into the vector database of your choice.
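As a small illustration of what you can do next, here is a sketch of a brute-force semantic search over the parsed embeddings with NumPy cosine similarity. The query-encoding step assumes `sentence-transformers`; in production a vector database would typically replace this.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# `df` is the dataframe prepared above, with embeddings_bge-m3 already parsed into lists of floats.
matrix = np.array(df["embeddings_bge-m3"].tolist(), dtype=np.float32)
matrix /= np.linalg.norm(matrix, axis=1, keepdims=True)  # normalize rows for cosine similarity

model = SentenceTransformer("BAAI/bge-m3")
query = "vidéosurveillance sur le lieu de travail"
query_vec = model.encode(query, normalize_embeddings=True)

scores = matrix @ query_vec                # cosine similarity against every chunk
top_idx = np.argsort(scores)[::-1][:5]     # indices of the 5 most similar chunks
print(df.iloc[top_idx][["title", "chunk_text"]])
```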
🐱 GitHub repository:
The MediaTech project is open source! You are welcome to contribute, or to review the complete code used to build the dataset, by checking the GitHub repository.
📚 Source & License
🔗 Source: the dedicated DILA open data repository and data.gouv.fr (Les délibérations de la CNIL).
📄 Licence:
Open License (Etalab): this dataset is publicly available and can be reused under the conditions of the Etalab open licence.