---
annotations_creators:
  - other
language_creators:
  - found
language:
  - bg
  - cs
  - da
  - de
  - el
  - en
  - es
  - et
  - fi
  - fr
  - ga
  - hr
  - hu
  - it
  - lt
  - lv
  - mt
  - nl
  - pl
  - pt
  - ro
  - sk
  - sl
  - sv
license:
  - cc-by-4.0
multilinguality:
  - multilingual
paperswithcode_id: null
pretty_name: 'EUWikipedias: A dataset of Wikipedias in the EU languages'
size_categories:
  - 10M<n<100M
source_datasets:
  - original
task_categories:
  - fill-mask
---

# Dataset Card for EUWikipedias: A dataset of Wikipedias in the EU languages

## Table of Contents

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Joel Niklaus

### Dataset Summary

Wikipedia dataset containing cleaned articles in the 24 EU languages. The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/), with one subset per language. Each example contains the content of one full Wikipedia article, cleaned to strip markup and unwanted sections (references, etc.).

### Supported Tasks and Leaderboards

The dataset supports the task of fill-mask (masked language modeling).
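
As a rough illustration of the fill-mask use case (not something prescribed by this dataset), an article sentence can be fed to a multilingual masked language model; the choice of `xlm-roberta-base` below is only an assumption for the example:

```python
from transformers import pipeline

# Illustrative sketch: any multilingual masked LM would do here;
# xlm-roberta-base is an assumed choice, not part of this dataset.
fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

# XLM-RoBERTa uses "<mask>" as its mask token.
for prediction in fill_mask("Berlin ist die Hauptstadt von <mask>."):
    print(prediction["token_str"], prediction["score"])
```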

### Languages

The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv

## Dataset Structure

The data files are organized as `{date}/{language}_{shard}.jsonl.xz`. At the moment, only the date '20221120' is supported.

Use the dataset like this:

```python
from datasets import load_dataset

dataset = load_dataset('joelito/EU_Wikipedias', date="20221120", language="de", split='train', streaming=True)
```
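
With `streaming=True`, examples are yielded lazily, so the snippet above does not download all shards up front. A quick way to peek at the first article, continuing from the snippet above (the field names mentioned in the comment are an assumption, since the card does not document them):

```python
# Fetch the first article from the streaming dataset defined above.
example = next(iter(dataset))

# The exact schema is not documented here; Wikipedia-style builders
# typically expose fields such as id, url, title and text.
print(sorted(example.keys()))
```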

### Data Instances

The file format is JSON Lines compressed with XZ (`jsonl.xz`), and there is one split available (`train`).
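
Because each shard is plain JSON Lines compressed with XZ, a single file can also be inspected without the `datasets` library. A minimal sketch, assuming a hypothetical shard name that follows the `{date}/{language}_{shard}.jsonl.xz` pattern:

```python
import json
import lzma

# Hypothetical shard path following the {date}/{language}_{shard}.jsonl.xz layout.
path = "20221120/de_0.jsonl.xz"

with lzma.open(path, "rt", encoding="utf-8") as f:
    first_line = f.readline()
    article = json.loads(first_line)
    # Print whichever fields the example actually carries.
    print(sorted(article.keys()))
```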

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

This dataset has been created by downloading the Wikipedias for the 24 EU languages using the olm/wikipedia loading script. For more information about the creation of the dataset, please refer to prepare_wikipedias.py.
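
A minimal sketch of what that download step could look like, assuming the `date` and `language` parameters of the olm/wikipedia loading script; the actual logic lives in prepare_wikipedias.py and may differ:

```python
from datasets import load_dataset

EU_LANGUAGES = [
    "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr",
    "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv",
]

for lang in EU_LANGUAGES:
    # Download and clean the Wikipedia dump for one language.
    # Note: the olm/wikipedia script has additional dependencies (e.g. mwparserfromhell).
    wiki = load_dataset("olm/wikipedia", language=lang, date="20221120", split="train")
    print(lang, wiki.num_rows)
```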

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

TODO add citation

### Contributions

Thanks to @JoelNiklaus for adding this dataset.