metadata
annotations_creators:
  - other
language_creators:
  - found
language:
  - bg
  - cs
  - da
  - de
  - el
  - en
  - es
  - et
  - fi
  - fr
  - ga
  - hr
  - hu
  - it
  - lt
  - lv
  - mt
  - nl
  - pl
  - pt
  - ro
  - sk
  - sl
  - sv
license:
  - cc-by-4.0
multilinguality:
  - multilingual
paperswithcode_id: null
pretty_name: >-
  MultiLegalPile_Wikipedia_Filtered: A filtered version of the MultiLegalPile
  dataset, together with Wikipedia articles.
size_categories:
  - 10M<n<100M
source_datasets:
  - original
task_categories:
  - fill-mask

Dataset Card for MultiLegalPile_Wikipedia_Filtered: A filtered version of the MultiLegalPile dataset, together with Wikipedia articles

Table of Contents

  • Dataset Description
  • Dataset Summary
  • Supported Tasks and Leaderboards
  • Languages
  • Dataset Structure
  • Data Instances
  • Data Fields
  • Data Splits
  • Dataset Creation
  • Considerations for Using the Data
  • Additional Information

Dataset Description

  • Homepage:
  • Repository:
  • Paper:
  • Leaderboard:
  • Point of Contact: Joel Niklaus

Dataset Summary

The Multi_Legal_Pile is a large-scale multilingual legal dataset suited for pretraining language models. It spans 24 languages and four legal text types; this filtered version additionally includes Wikipedia articles.

Supported Tasks and Leaderboards

The dataset supports the task of fill-mask.

Languages

The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv

Dataset Structure

It is structured in the following format: {language}_{text_type}_{shard}.jsonl.xz (a sketch for enumerating the resulting configurations follows the list below).

text_type is one of the following:

  • caselaw
  • contracts
  • legislation
  • other
  • wikipedia
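
For illustration, the available configuration names can be enumerated from the language and text-type lists on this card. This is only a sketch; some combinations are very small or may not exist at all (see Data Instances below).

languages = [
    'bg', 'cs', 'da', 'de', 'el', 'en', 'es', 'et', 'fi', 'fr', 'ga', 'hr',
    'hu', 'it', 'lt', 'lv', 'mt', 'nl', 'pl', 'pt', 'ro', 'sk', 'sl', 'sv',
]
text_types = ['caselaw', 'contracts', 'legislation', 'other', 'wikipedia']

# Build the {language}_{text_type} configuration names accepted by load_dataset.
configs = [f'{language}_{text_type}' for language in languages for text_type in text_types]
print(len(configs))  # 24 languages x 5 text types = 120 combinations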

Use the dataset like this:

from datasets import load_dataset

config = 'en_contracts' # {language}_{text_type}
dataset = load_dataset('joelito/Multi_Legal_Pile', config, split='train', streaming=True)

'config' is a combination of language and text_type, e.g. 'en_contracts' or 'de_caselaw'. To load all the languages or all the text_types, use 'all' instead of the language or text_type (e.g., 'all_legislation').
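
A usage sketch for the streaming case (the record fields are not documented in this card, so the example only inspects the keys of the first few records):

import itertools

from datasets import load_dataset

# Stream the legislation of all languages without downloading everything upfront.
dataset = load_dataset('joelito/Multi_Legal_Pile', 'all_legislation', split='train', streaming=True)

# Peek at the first two records; the exact fields are intentionally not assumed here.
for example in itertools.islice(dataset, 2):
    print(sorted(example.keys()))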

Data Instances

The file format is jsonl.xz and there are train and validation splits available. Since some configurations are very small or non-existent, they may lack a train split or not be present at all.
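
Since each shard is an xz-compressed JSON Lines file, a manually downloaded shard can also be read with the Python standard library. The file name below is hypothetical and only illustrates the {language}_{text_type}_{shard}.jsonl.xz pattern:

import json
import lzma

# 'en_caselaw_0.jsonl.xz' is a hypothetical shard name following the naming pattern above.
with lzma.open('en_caselaw_0.jsonl.xz', mode='rt', encoding='utf-8') as f:
    for line in f:
        record = json.loads(line)  # one JSON document per line
        print(sorted(record.keys()))  # the fields are not documented in this card; inspect them first
        break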

The complete dataset consists of five large subsets:

  • Native Multi Legal Pile
  • Eurlex Resources
  • MC4 Legal
  • Pile of Law
  • EU Wikipedias

Data Fields

[More Information Needed]

Data Splits

[More Information Needed]

Dataset Creation

This dataset has been created by combining the following datasets: Native Multi Legal Pile, Eurlex Resources, MC4 Legal, Pile of Law, and EU Wikipedias. It has been filtered to remove short documents (fewer than 64 whitespace-separated tokens) and documents with more than 30% punctuation or numbers (see prepare_legal_data.py for details).
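
A minimal sketch of these filtering rules (the authoritative logic lives in prepare_legal_data.py; the card does not state whether the 30% threshold is computed over characters or tokens, so this sketch assumes characters):

import string

def keep_document(text: str) -> bool:
    # Drop short documents: fewer than 64 whitespace-separated tokens.
    if len(text.split()) < 64:
        return False
    # Drop documents where more than 30% of the characters are punctuation or digits
    # (assumption: the ratio is computed at the character level).
    noisy = sum(1 for char in text if char in string.punctuation or char.isdigit())
    return noisy / max(len(text), 1) <= 0.3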

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

TODO add citation

Contributions

Thanks to @JoelNiklaus for adding this dataset.