---
annotations_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
language_creators:
- found
license:
- mit
multilinguality:
- multilingual
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- legal documents
- corpus
- eurlex
- html
task_categories:
- text-classification
- fill-mask
task_ids:
- multi-class-classification
- multi-label-classification
pretty_name: 'SuperEURLEX: A Corpus of Plain Text and HTML from EURLEX, Annotated for multiple Legal Domain Text Classification Tasks.'
---

# Dataset Card for SuperEURLEX

This dataset contains over 4.6M annotated legal documents from EURLEX.
Over 3.7M of these documents are also available in HTML format.
The dataset can be used for pretraining language models as well as for evaluating them on legal text classification tasks.

Use this dataset as follows:

```python
from datasets import load_dataset
config = "0.DE" # {sector}.{lang}[.html]
dataset = load_dataset("ddrg/super_eurlex", config, split='train')
```

## Dataset Details

### Dataset Description

This dataset was scraped from [EURLEX](https://eur-lex.europa.eu/homepage.html).
It contains more than 4.6M legal documents in plain text and over 3.7M in HTML format.
The documents are split by their language (the dataset covers all 24 official EU languages)
and by their sector.


#### Number of documents per language

| Language |     Raw |    HTML |
|:---------|--------:|--------:|
| BG |  29,778 |  27,718 |
| CS |  94,439 |  91,754 |
| DA | 398,559 | 300,488 |
| DE | 384,179 | 265,724 |
| EL | 167,502 | 117,009 |
| EN | 456,212 | 354,186 |
| ES | 253,821 | 201,400 |
| ET | 142,183 | 139,690 |
| FI | 238,143 | 214,206 |
| FR | 427,011 | 305,592 |
| GA |  19,673 |  19,437 |
| HR |  37,200 |  35,944 |
| HU |  69,275 |  66,334 |
| IT | 358,637 | 259,936 |
| LT |  62,975 |  61,139 |
| LV | 105,433 | 102,105 |
| MT |  46,695 |  43,969 |
| NL | 345,276 | 237,366 |
| PL | 146,502 | 143,490 |
| PT | 369,571 | 314,148 |
| RO |  47,398 |  45,317 |
| SK | 100,718 |  98,192 |
| SL | 170,583 | 166,646 |
| SV | 172,926 | 148,656 |


- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** The 24 official EU languages listed above
- **License:** MIT

### Dataset Sources [optional]

- **Repository:** https://huggingface.co/datasets/ddrg/super_eurlex/tree/main
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### As a corpus for:
- **Pretraining of language models with self-supervised tasks** like masked language modeling and next sentence prediction
- Legal text analysis

### As a dataset for evaluation on the following tasks:
- *eurovoc* concept prediction, i.e. which EUROVOC tags apply? (large-scale multi-label classification)
  - An example for this task is given below
- *subject-matter* prediction, i.e. which subject-matter tags apply? (multi-label classification)
- *form* classification, i.e. what kind of document is it? (multi-class classification; a minimal sketch follows the EUROVOC example below)
- And more

### Example: EUROVOC Concept Prediction

```python
from datasets import load_dataset
import transformers as tr
from sklearn.preprocessing import MultiLabelBinarizer
import numpy as np 
import evaluate
import uuid

# ==================== #
#     Prepare Data     #
# ==================== #
CONFIG = "3.EN" # {sector}.{lang}[.html]
MODEL_NAME = "distilroberta-base"
dataset = load_dataset("ddrg/super_eurlex", CONFIG, split='train')
tokenizer = tr.AutoTokenizer.from_pretrained(MODEL_NAME)

# Remove unlabeled rows (documents without EUROVOC annotations)
def remove_nulls(batch):
  return [sample is not None for sample in batch["eurovoc"]]
dataset = dataset.filter(remove_nulls, batched=True, keep_in_memory=True)

# Tokenize Text
def tokenize(batch):
  return tokenizer(batch["text_cleaned"], truncation=True, padding="max_length")
# keep_in_memory is optional (the dataset is large, though, and can easily use up a lot of memory)
dataset = dataset.map(tokenize, batched=True, keep_in_memory=True)

# Create label column by binarizing the EUROVOC concepts
encoder = MultiLabelBinarizer()
# Fit on all label sets that occur in the data
eurovoc_concepts = dataset["eurovoc"]
encoder.fit(eurovoc_concepts)
def encode_labels(batch):
    # Multi-label classification (BCE loss) expects float labels
    batch["label"] = encoder.transform(batch["eurovoc"]).astype(np.float32)
    return batch
dataset = dataset.map(encode_labels, batched=True, keep_in_memory=True)

# Split into train and test set
dataset = dataset.train_test_split(0.2)

# ==================== #
#  Load & Train Model  #
# ==================== #
model = tr.AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=len(encoder.classes_),
    problem_type="multi_label_classification",
)

metric = evaluate.load("JP-SystemsX/nDCG", experiment_id=uuid.uuid4())
def compute_metric(eval_pred):
    predictions, labels = eval_pred
    return metric.compute(predictions=predictions, references=labels, k=5)

# Set hyperparameters
# Note: we mostly stay with default values to keep the example short,
# though more hyperparameters should be set and tuned in practice
train_args = tr.TrainingArguments(
    output_dir="./cache",
    per_device_train_batch_size=16,
    num_train_epochs=20
)
trainer = tr.Trainer(
    model=model,
    args=train_args,
    train_dataset=dataset["train"],
    compute_metrics=compute_metric,
)
trainer.train() # This will take a while
print(trainer.evaluate(dataset["test"]))
# >>> {'eval_loss': 0.0018887673504650593, 'eval_nDCG@5': 0.8072531683578489, 'eval_runtime': 663.8582, 'eval_samples_per_second': 32.373, 'eval_steps_per_second': 4.048, 'epoch': 20.0}
```
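
### Example: *form* Classification

The *form* column can be used analogously for a multi-class task. The following is a minimal sketch, assuming that *form* holds a single string label per document (as described in the sector schemas below); hyperparameters are kept at defaults for brevity.

```python
from datasets import load_dataset
from sklearn.preprocessing import LabelEncoder
import transformers as tr

CONFIG = "3.EN"  # legislation, English, plain text
MODEL_NAME = "distilroberta-base"

dataset = load_dataset("ddrg/super_eurlex", CONFIG, split="train")
tokenizer = tr.AutoTokenizer.from_pretrained(MODEL_NAME)

# Keep only rows that carry a *form* annotation
dataset = dataset.filter(lambda batch: [f is not None for f in batch["form"]], batched=True)

# Encode the single *form* label of each document as an integer class id
encoder = LabelEncoder()
encoder.fit(dataset["form"])

def preprocess(batch):
    enc = tokenizer(batch["text_cleaned"], truncation=True, padding="max_length")
    enc["label"] = encoder.transform(batch["form"])
    return enc

dataset = dataset.map(preprocess, batched=True)
dataset = dataset.train_test_split(0.2)

model = tr.AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(encoder.classes_)
)
trainer = tr.Trainer(
    model=model,
    args=tr.TrainingArguments(output_dir="./cache", per_device_train_batch_size=16, num_train_epochs=3),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
print(trainer.evaluate())
```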


### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

This dataset is divided into multiple subsets by _Sector x Language x Format_.

Sector refers to the EURLEX sector a document belongs to:
- **0:** Consolidated acts
- **1:** Treaties 
- **2:** International agreements
- **3:** Legislation
- **4:** Complementary legislation
- **5:** Preparatory acts and working documents
- **6:** Case-law
- **7:** National transposition measures
- **8:** References to national case-law concerning EU law
- **9:** Parliamentary questions
- **C:** Other documents published in the Official Journal C series
- **E:** EFTA documents

Language refers to one of the 24 official EU languages at the time of dataset creation:
- BG ~ Bulgarian
- CS ~ Czech
- DA ~ Danish
- DE ~ German
- EL ~ Greek
- EN ~ English
- ES ~ Spanish
- ET ~ Estonian
- FI ~ Finnish
- FR ~ French
- GA ~ Irish
- HR ~ Croatian
- HU ~ Hungarian
- IT ~ Italian
- LT ~ Lithuanian
- LV ~ Latvian
- MT ~ Maltese
- NL ~ Dutch
- PL ~ Polish
- PT ~ Portuguese
- RO ~ Romanian
- SK ~ Slovak
- SL ~ Slovenian
- SV ~ Swedish

Format refers to plain text (default) or HTML (`.html`).
> Note: The plain-text subsets generally contain more documents because not all documents were available in HTML format; those that were are included in both formats.

These subsets are named as follows:
`{sector}.{lang}[.html]`

For example:
- `3.EN` contains English legislative documents in plain text format
- `3.EN.html` contains the same documents in HTML format
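
As a small illustration of the naming scheme, the sketch below loads the plain-text and HTML variants of the same sector/language side by side; the columns available per format are listed in the sector overviews that follow.

```python
from datasets import load_dataset

sector, lang = "3", "EN"

# Plain-text variant of the subset
text_ds = load_dataset("ddrg/super_eurlex", f"{sector}.{lang}", split="train")

# HTML variant of the same subset (extra ".html" suffix)
html_ds = load_dataset("ddrg/super_eurlex", f"{sector}.{lang}.html", split="train")

print(text_ds.column_names)  # includes 'text_cleaned'
print(html_ds.column_names)  # includes 'text_html_raw'
```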

Each _Sector_ has its own set of metadata:

<details><summary>Sector 0 (Consolidated acts)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty

</p>
</details>

<details><summary>Sector 1 (Treaties)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _subject_matter_ ~ Keywords that provide general overview of content in a document see [here](https://eur-lex.europa.eu/content/e-learning/browsing_options.html) for more information
- _current_consolidated_version_ ~ date when this version of the document was consolidated `Format DD/MM/YYYY`
- _directory_code_ ~ Information to structure documents in some kind of directory structure by topic e.g. `'03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'`
- _eurovoc_ ~ Keywords that describe document content based on the European Vocabulary see [here](https://eur-lex.europa.eu/browse/eurovoc.html) for more information

</p>
</details>


<details><summary>Sector 2 (International agreements)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _directory_code_ ~ Information to structure documents in some kind of directory structure by topic e.g. `'03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'`
- _subject_matter_ ~ Keywords that provide general overview of content in a document see [here](https://eur-lex.europa.eu/content/e-learning/browsing_options.html) for more information
- _eurovoc_ ~ Keywords that describe document content based on the European Vocabulary see [here](https://eur-lex.europa.eu/browse/eurovoc.html) for more information
- _latest_consolidated_version_ ~ `Format DD/MM/YYYY`
- _current_consolidated_version_ ~ `Format DD/MM/YYYY`

</p>
</details>


<details><summary>Sector 3 (Legislation)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _directory_code_ ~ Information to structure documents in some kind of directory structure by topic e.g. `'03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'`
- _subject_matter_ ~ Keywords that provide general overview of content in a document see [here](https://eur-lex.europa.eu/content/e-learning/browsing_options.html) for more information
- _eurovoc_ ~ Keywords that describe document content based on the European Vocabulary see [here](https://eur-lex.europa.eu/browse/eurovoc.html) for more information
- _latest_consolidated_version_ ~ `Format DD/MM/YYYY`
- _current_consolidated_version_ ~ `Format DD/MM/YYYY`

</p>
</details>


<details><summary>Sector 4 (Complementary legislation)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _directory_code_ ~ Information to structure documents in some kind of directory structure by topic e.g. `'03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'`
- _subject_matter_ ~ Keywords that provide general overview of content in a document see [here](https://eur-lex.europa.eu/content/e-learning/browsing_options.html) for more information
- _eurovoc_ ~ Keywords that describe document content based on the European Vocabulary see [here](https://eur-lex.europa.eu/browse/eurovoc.html) for more information
- _latest_consolidated_version_ ~ `Format DD/MM/YYYY`
- _current_consolidated_version_ ~ `Format DD/MM/YYYY`

</p>
</details>


<details><summary>Sector 5 (Preparatory acts and working documents)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _directory_code_ ~ Information to structure documents in some kind of directory structure by topic e.g. `'03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'`
- _subject_matter_ ~ Keywords that provide general overview of content in a document see [here](https://eur-lex.europa.eu/content/e-learning/browsing_options.html) for more information
- _eurovoc_ ~ Keywords that describe document content based on the European Vocabulary see [here](https://eur-lex.europa.eu/browse/eurovoc.html) for more information
- _latest_consolidated_version_ ~ `Format DD/MM/YYYY`

</p>
</details>


<details><summary>Sector 6 (Case-law)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _directory_code_ ~ Information to structure documents in some kind of directory structure by topic e.g. `'03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'`
- _subject_matter_ ~ Keywords that provide general overview of content in a document see [here](https://eur-lex.europa.eu/content/e-learning/browsing_options.html) for more information
- _eurovoc_ ~ Keywords that describe document content based on the European Vocabulary see [here](https://eur-lex.europa.eu/browse/eurovoc.html) for more information
- _case-law_directory_code_before_lisbon_ ~ Classification system used for case law before the Treaty of Lisbon came into effect (2009); each code reflects a particular area of EU law

</p>
</details>


<details><summary>Sector 7 (National transposition measures)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _transposed_legal_acts_ ~ national laws that exist in EU member states as a direct result of the need to comply with EU directives

</p>
</details>


<details><summary>Sector 8 (References to national case-law concerning EU law)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _case-law_directory_code_before_lisbon_ ~ Classification system used for case law before the Treaty of Lisbon came into effect (2009); each code reflects a particular area of EU law
- _subject_matter_ ~ Keywords that provide general overview of content in a document see [here](https://eur-lex.europa.eu/content/e-learning/browsing_options.html) for more information

</p>
</details>


<details><summary>Sector 9 (Parliamentary questions)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _directory_code_ ~ Information to structure documents in some kind of directory structure by topic e.g. `'03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'`
- _subject_matter_ ~ Keywords that provide general overview of content in a document see [here](https://eur-lex.europa.eu/content/e-learning/browsing_options.html) for more information
- _eurovoc_ ~ Keywords that describe document content based on the European Vocabulary see [here](https://eur-lex.europa.eu/browse/eurovoc.html) for more information

</p>
</details>


<details><summary>Sector C (Other documents published in the Official Journal C series)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _eurovoc_ ~ Keywords that describe document content based on the European Vocabulary see [here](https://eur-lex.europa.eu/browse/eurovoc.html) for more information

</p>
</details>


<details><summary>Sector E (EFTA documents)</summary><p>

- _celex_id_ ~ Unique Identifier for each document 
- _text_cleaned_ (Plain Text) **or** _text_html_raw_ (HTML Format)
- _form_ ~ Kind of Document e.g. Consolidated text, or Treaty
- _directory_code_ ~ Information to structure documents in some kind of directory structure by topic e.g. `'03.50.30.00 Agriculture / Approximation of laws and health measures / Animal health and zootechnics'`
- _subject_matter_ ~ Keywords that provide general overview of content in a document see [here](https://eur-lex.europa.eu/content/e-learning/browsing_options.html) for more information
- _eurovoc_ ~ Keywords that describe document content based on the European Vocabulary see [here](https://eur-lex.europa.eu/browse/eurovoc.html) for more information

</p>
</details>
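
Since the available metadata differs per sector and not every document is annotated, a quick way to check what a given subset provides is to inspect its columns and count missing values. A minimal sketch, using the `3.EN` subset as an example:

```python
from datasets import load_dataset

dataset = load_dataset("ddrg/super_eurlex", "3.EN", split="train")

# Metadata columns available for this sector
print(dataset.column_names)

# Rough count of missing annotations per metadata column
for column in dataset.column_names:
    if column in ("celex_id", "text_cleaned"):
        continue  # skip the identifier and text columns
    missing = sum(value is None for value in dataset[column])
    print(f"{column}: {missing} / {len(dataset)} rows without annotation")
```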


## Dataset Creation

### Curation Rationale

This dataset was created for the pretraining and/or evaluation of legal language models.

### Source Data

#### Data Collection and Processing

We used the [EURLEX-Web-Scrapper Repo](https://github.com/JP-SystemsX/Eurlex-Web-Scrapper) for the data collection process.


#### Who are the source data producers?

The source data stems from the [EURLEX website](https://eur-lex.europa.eu/) and was therefore produced by various entities within the European Union.


#### Personal and Sensitive Information

No Personal or Sensitive Information is included to the best of our knowledge.

## Bias, Risks, and Limitations

- We removed HTML documents from which we couldn't extract plain text, under the assumption that those are **corrupted files**.
However, we can't guarantee that we removed all of them.
- The extraction of plain text from legal HTML documents can lead to **formatting issues**,
e.g. extracting text from tables might mix up the order such that the result becomes nearly incomprehensible.
- This dataset may contain many **missing values** in the metadata columns, as not every document was annotated in the same way.

[More Information Needed]

### Recommendations

- Consider removing rows with missing values in the target column before training a model on the corresponding task (see the sketch below)
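
For example, when training on the *eurovoc* labels, unlabeled rows can be dropped with a filter like the one used in the EUROVOC example above:

```python
# Keep only rows that carry an annotation in the target column (here: 'eurovoc')
def has_label(batch):
    return [value is not None for value in batch["eurovoc"]]

dataset = dataset.filter(has_label, batched=True)
```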

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]