Dataset Card for SuperEURLEX

This dataset contains over 4.6M legal documents from EURLEX with annotations. Over 3.7M of these 4.6M documents are also available in HTML format. The dataset can be used for pretraining language models as well as for testing them on legal text classification tasks.

Use this dataset as follows:

from datasets import load_dataset

config = "0.DE"  # {sector}.{lang}[.html]
# Note: newer versions of the datasets library may additionally require
# trust_remote_code=True, since this dataset relies on a loading script.
dataset = load_dataset("ddrg/super_eurlex", config, split='train')

Dataset Details

Dataset Description

This dataset was scraped from EURLEX. It contains more than 4.6M legal documents in plain text and over 3.7M in HTML format. The documents are separated by their language (the dataset includes a total of 24 official European languages) and by their sector.

The table below shows the number of documents per language and format:

| Language | Raw (plain text) | HTML |
|----------|-----------------:|-----:|
| BG | 29,778 | 27,718 |
| CS | 94,439 | 91,754 |
| DA | 398,559 | 300,488 |
| DE | 384,179 | 265,724 |
| EL | 167,502 | 117,009 |
| EN | 456,212 | 354,186 |
| ES | 253,821 | 201,400 |
| ET | 142,183 | 139,690 |
| FI | 238,143 | 214,206 |
| FR | 427,011 | 305,592 |
| GA | 19,673 | 19,437 |
| HR | 37,200 | 35,944 |
| HU | 69,275 | 66,334 |
| IT | 358,637 | 259,936 |
| LT | 62,975 | 61,139 |
| LV | 105,433 | 102,105 |
| MT | 46,695 | 43,969 |
| NL | 345,276 | 237,366 |
| PL | 146,502 | 143,490 |
| PT | 369,571 | 314,148 |
| RO | 47,398 | 45,317 |
| SK | 100,718 | 98,192 |
| SL | 170,583 | 166,646 |
| SV | 172,926 | 148,656 |
  • Curated by: [More Information Needed]
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Language(s) (NLP): BG, CS, DA, DE, EL, EN, ES, ET, FI, FR, GA, HR, HU, IT, LT, LV, MT, NL, PL, PT, RO, SK, SL, SV (24 official EU languages)
  • License: [More Information Needed]

Dataset Sources [optional]

Uses

As a corpus for:

  • Pretraining of language models with self-supervised tasks like masked language modeling and next-sentence prediction (see the sketch below)
  • Legal text analysis
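
To make the pretraining use case concrete, here is a minimal sketch that prepares a plain-text split for masked language modeling; the config, model name, and masking probability are illustrative choices, not prescribed by the dataset:

from datasets import load_dataset
import transformers as tr

dataset = load_dataset("ddrg/super_eurlex", "3.EN", split="train")
tokenizer = tr.AutoTokenizer.from_pretrained("distilroberta-base")

def tokenize(batch):
    return tokenizer(batch["text_cleaned"], truncation=True, max_length=512)

# Drop the original columns so only the tokenized inputs remain
tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# The collator masks 15% of the tokens on the fly for the MLM objective
collator = tr.DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)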

As a dataset for evaluation on the following tasks:

  • EuroVoc concept prediction, i.e. which tags apply? (large-scale multi-label classification)
    • An example for this task is given below
  • Subject-matter prediction, i.e. which other tags apply? (multi-label classification)
  • Form classification, i.e. what kind of document is it? (multi-class classification; see the sketch below)
  • And more
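
For instance, the form column can serve directly as a multi-class target. A quick sketch to inspect its label distribution, assuming form holds a single value per document and using config "3.EN" as an example:

from collections import Counter
from datasets import load_dataset

dataset = load_dataset("ddrg/super_eurlex", "3.EN", split="train")
# Count how often each kind of document occurs
print(Counter(dataset["form"]).most_common(10))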

Example: Predicting EuroVoc Concepts

from datasets import load_dataset
import transformers as tr
from sklearn.preprocessing import MultiLabelBinarizer
import numpy as np 
import evaluate
import uuid

# ==================== #
#     Prepare Data     #
# ==================== #
CONFIG = "3.EN" # {sector}.{lang}[.html]
MODEL_NAME = "distilroberta-base"
dataset = load_dataset("ddrg/super_eurlex", CONFIG, split='train')
tokenizer = tr.AutoTokenizer.from_pretrained(MODEL_NAME)

# Remove unlabeled rows (documents without EuroVoc annotations)
def remove_nulls(batch):
  return [(sample is not None) for sample in batch["eurovoc"]]
dataset = dataset.filter(remove_nulls, batched=True, keep_in_memory=True)

# Tokenize text
def tokenize(batch):
  return tokenizer(batch["text_cleaned"], truncation=True, padding="max_length")
# keep_in_memory is optional (the dataset is large, though, and can easily use up a lot of memory)
dataset = dataset.map(tokenize, batched=True, keep_in_memory=True)

# Create the label column by encoding the EuroVoc concepts
encoder = MultiLabelBinarizer()
# Fit the encoder on the list of all labels occurring in the dataset
eurovoc_concepts = dataset["eurovoc"]
encoder.fit(eurovoc_concepts)
def encode_labels(batch):
    # Cast to float, as the BCE loss used for multi-label classification expects float targets
    batch["label"] = encoder.transform(batch["eurovoc"]).astype(np.float32)
    return batch
dataset = dataset.map(encode_labels, batched=True, keep_in_memory=True)

# Split into train and test sets
dataset = dataset.train_test_split(0.2)

# ==================== #
#  Load & Train Model  #
# ==================== #
model = tr.AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=len(encoder.classes_),
    problem_type="multi_label_classification",
)

metric = evaluate.load("JP-SystemsX/nDCG", experiment_id=str(uuid.uuid4()))
def compute_metric(eval_pred):
    predictions, labels = eval_pred
    return metric.compute(predictions=predictions, references=labels, k=5)

# Set hyperparameters
# Note: We stay mostly with default values to keep the example short,
# though more hyperparameters should be set and tuned in practice
train_args = tr.TrainingArguments(
    output_dir="./cache",
    per_device_train_batch_size=16,
    num_train_epochs=20
)
trainer = tr.Trainer(
    model=model,
    args=train_args,
    train_dataset=dataset["train"],
    compute_metrics=compute_metric,
)
trainer.train() # This will take a while
print(trainer.evaluate(dataset["test"]))
# >>> {'eval_loss': 0.0018887673504650593, 'eval_nDCG@5': 0.8072531683578489, 'eval_runtime': 663.8582, 'eval_samples_per_second': 32.373, 'eval_steps_per_second': 4.048, 'epoch': 20.0}
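
After training, the model can be applied to unseen documents. Below is a minimal inference sketch continuing the example above; the input text is made up for illustration:

import torch

text = "Commission Regulation laying down detailed rules on import licences."  # hypothetical example input
inputs = tokenizer(text, truncation=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits
# Multi-label setup: a sigmoid yields an independent probability per EuroVoc concept
probs = torch.sigmoid(logits)[0]
top5 = torch.topk(probs, k=5).indices.tolist()
print([encoder.classes_[i] for i in top5])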

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset is divided into multiple splits by sector × language × format.

Sector refers to the kind of document a text belongs to:

  • 0: Consolidated acts
  • 1: Treaties
  • 2: International agreements
  • 3: Legislation
  • 4: Complementary legislation
  • 5: Preparatory acts and working documents
  • 6: Case-law
  • 7: National transposition measures
  • 8: References to national case-law concerning EU law
  • 9: Parliamentary questions
  • C: Other documents published in the Official Journal C series
  • E: EFTA documents

Language refers to one of the 24 official European languages that were included at the time of dataset creation:

  • BG ~ Bulgarian
  • CS ~ Czech
  • DA ~ Danish
  • DE ~ German
  • EL ~ Greek
  • EN ~ English
  • ES ~ Spanish
  • ET ~ Estonian
  • FI ~ Finnish
  • FR ~ French
  • GA ~ Irish
  • HR ~ Croatian
  • HU ~ Hungarian
  • IT ~ Italian
  • LT ~ Lithuanian
  • LV ~ Latvian
  • MT ~ Maltese
  • NL ~ Dutch
  • PL ~ Polish
  • PT ~ Portuguese
  • RO ~ Romanian
  • SK ~ Slovak
  • SL ~ Slovenian
  • SV ~ Swedish

Format refers to plain text (default) or HTML format (.html).

Note: The plain-text splits generally contain more documents because not all documents were available in HTML format; those that were are included in both formats (see the sketch below for matching documents across the two formats).

The splits are named the following way: {sector}.{lang}[.html]

For example:

  • 3.EN contains English legislative documents in plain text format
  • 3.EN.html contains the same documents in HTML format
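
Because the HTML splits are a subset of the plain-text splits, documents can be matched across the two formats via the shared celex_id column. A minimal sketch:

from datasets import load_dataset

text_ds = load_dataset("ddrg/super_eurlex", "3.EN", split="train")
html_ds = load_dataset("ddrg/super_eurlex", "3.EN.html", split="train")

# Keep only the plain-text documents that are also available in HTML format
html_ids = set(html_ds["celex_id"])
both = text_ds.filter(
    lambda batch: [cid in html_ids for cid in batch["celex_id"]],
    batched=True,
)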

Each sector has its own set of metadata (a sketch for inspecting a given config's columns follows the sector listings below):

Sector 0 (Consolidated acts)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty

Sector 1 (Treaties)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • subject_matter ~ keywords that provide a general overview of a document's content; see here for more information
  • current_consolidated_version ~ date when the current version of the document was consolidated; format DD/MM/YYYY
  • directory_code ~ code used to organize documents in a topic-based directory structure, e.g. '03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'
  • eurovoc ~ keywords that describe the document's content based on the European Vocabulary (EuroVoc); see here for more information

Sector 2 (International agreements)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • directory_code ~ code used to organize documents in a topic-based directory structure, e.g. '03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'
  • subject_matter ~ keywords that provide a general overview of a document's content; see here for more information
  • eurovoc ~ keywords that describe the document's content based on the European Vocabulary (EuroVoc); see here for more information
  • latest_consolidated_version ~ format DD/MM/YYYY
  • current_consolidated_version ~ format DD/MM/YYYY

Sector 3 (Legislation)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • directory_code ~ code used to organize documents in a topic-based directory structure, e.g. '03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'
  • subject_matter ~ keywords that provide a general overview of a document's content; see here for more information
  • eurovoc ~ keywords that describe the document's content based on the European Vocabulary (EuroVoc); see here for more information
  • latest_consolidated_version ~ format DD/MM/YYYY
  • current_consolidated_version ~ format DD/MM/YYYY

Sector 4 (Complementary legislation)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • directory_code ~ code used to organize documents in a topic-based directory structure, e.g. '03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'
  • subject_matter ~ keywords that provide a general overview of a document's content; see here for more information
  • eurovoc ~ keywords that describe the document's content based on the European Vocabulary (EuroVoc); see here for more information
  • latest_consolidated_version ~ format DD/MM/YYYY
  • current_consolidated_version ~ format DD/MM/YYYY

Sector 5 (Preparatory acts and working documents)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • directory_code ~ code used to organize documents in a topic-based directory structure, e.g. '03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'
  • subject_matter ~ keywords that provide a general overview of a document's content; see here for more information
  • eurovoc ~ keywords that describe the document's content based on the European Vocabulary (EuroVoc); see here for more information
  • latest_consolidated_version ~ format DD/MM/YYYY

Sector 6 (Case-law)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • directory_code ~ code used to organize documents in a topic-based directory structure, e.g. '03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'
  • subject_matter ~ keywords that provide a general overview of a document's content; see here for more information
  • eurovoc ~ keywords that describe the document's content based on the European Vocabulary (EuroVoc); see here for more information
  • case-law_directory_code_before_lisbon ~ classification system used for case law before the Treaty of Lisbon came into effect (2009); each code reflects a particular area of EU law

Sector 7 (National transposition measures)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • transposed_legal_acts ~ national laws that exist in EU member states as a direct result of the need to comply with EU directives

Sector 8 (References to national case-law concerning EU law)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • case-law_directory_code_before_lisbon ~ classification system used for case law before the Treaty of Lisbon came into effect (2009); each code reflects a particular area of EU law
  • subject_matter ~ keywords that provide a general overview of a document's content; see here for more information

Sector 9 (Parliamentary questions)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • directory_code ~ code used to organize documents in a topic-based directory structure, e.g. '03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'
  • subject_matter ~ keywords that provide a general overview of a document's content; see here for more information
  • eurovoc ~ keywords that describe the document's content based on the European Vocabulary (EuroVoc); see here for more information

Sector C (Other documents published in the Official Journal C series)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • eurovoc ~ keywords that describe the document's content based on the European Vocabulary (EuroVoc); see here for more information

Sector E (EFTA documents)

  • celex_id ~ unique identifier for each document
  • text_cleaned (plain text) or text_html_raw (HTML format)
  • form ~ kind of document, e.g. Consolidated text or Treaty
  • directory_code ~ code used to organize documents in a topic-based directory structure, e.g. '03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'
  • subject_matter ~ keywords that provide a general overview of a document's content; see here for more information
  • eurovoc ~ keywords that describe the document's content based on the European Vocabulary (EuroVoc); see here for more information
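
Since the available metadata differs per sector, a quick way to check what a given config provides is to load it and print its column names; sector 7 is used as an example here, and the expected columns follow the listing above:

from datasets import load_dataset

dataset = load_dataset("ddrg/super_eurlex", "7.EN", split="train")
print(dataset.column_names)
# e.g. ['celex_id', 'text_cleaned', 'form', 'transposed_legal_acts']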

Dataset Creation

Curation Rationale

This dataset was created for pretraining and/or evaluating legal language models.

Source Data

Data Collection and Processing

We used the EURLEX-Web-Scrapper repository for the data collection process.

Who are the source data producers?

The source data stems from the EURLEX website and was therefore produced by various entities within the European Union.

Personal and Sensitive Information

To the best of our knowledge, no personal or sensitive information is included.

Bias, Risks, and Limitations

  • We removed HTML documents from which we couldn't extract plain text, under the assumption that those were corrupted files. However, we can't guarantee that we removed all of them.
  • The extraction of plain text from legal HTML documents can lead to formatting issues; e.g., extracting text from tables might mix up the reading order such that the result becomes nearly incomprehensible.
  • This dataset might contain many missing values in the metadata columns, as not every document was annotated in the same way.


Recommendations

  • Consider removing rows with missing values in the columns relevant to your task before training a model (cf. the remove_nulls filter in the example above)

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Dataset Card Authors [optional]

[More Information Needed]

Dataset Card Contact

[More Information Needed]
