---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
licenses:
- cc-by-sa-3-0
- gfdl-1-3-or-later
task_categories:
- sequence-modeling
task_ids:
- language-modeling
source_datasets:
- original
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
languages:
20220301-aa:
- aa
20220301-ab:
- ab
20220301-ace:
- ace
20220301-ady:
- unknown
20220301-af:
- af
20220301-ak:
- ak
20220301-als:
- als
20220301-am:
- am
20220301-an:
- an
20220301-ang:
- ang
20220301-ar:
- ar
20220301-arc:
- arc
20220301-arz:
- arz
20220301-as:
- as
20220301-ast:
- ast
20220301-atj:
- atj
20220301-av:
- av
20220301-ay:
- ay
20220301-az:
- az
20220301-azb:
- azb
20220301-ba:
- ba
20220301-bar:
- bar
20220301-bat-smg:
- sgs
20220301-bcl:
- bcl
20220301-be:
- be
20220301-be-x-old:
- unknown
20220301-bg:
- bg
20220301-bh:
- bh
20220301-bi:
- bi
20220301-bjn:
- bjn
20220301-bm:
- bm
20220301-bn:
- bn
20220301-bo:
- bo
20220301-bpy:
- bpy
20220301-br:
- br
20220301-bs:
- bs
20220301-bug:
- bug
20220301-bxr:
- bxr
20220301-ca:
- ca
20220301-cbk-zam:
- cbk
20220301-cdo:
- cdo
20220301-ce:
- ce
20220301-ceb:
- ceb
20220301-ch:
- ch
20220301-cho:
- cho
20220301-chr:
- chr
20220301-chy:
- chy
20220301-ckb:
- ckb
20220301-co:
- co
20220301-cr:
- cr
20220301-crh:
- crh
20220301-cs:
- cs
20220301-csb:
- csb
20220301-cu:
- cu
20220301-cv:
- cv
20220301-cy:
- cy
20220301-da:
- da
20220301-de:
- de
20220301-din:
- din
20220301-diq:
- diq
20220301-dsb:
- dsb
20220301-dty:
- dty
20220301-dv:
- dv
20220301-dz:
- dz
20220301-ee:
- ee
20220301-el:
- el
20220301-eml:
- eml
20220301-en:
- en
20220301-eo:
- eo
20220301-es:
- es
20220301-et:
- et
20220301-eu:
- eu
20220301-ext:
- ext
20220301-fa:
- fa
20220301-ff:
- ff
20220301-fi:
- fi
20220301-fiu-vro:
- vro
20220301-fj:
- fj
20220301-fo:
- fo
20220301-fr:
- fr
20220301-frp:
- frp
20220301-frr:
- frr
20220301-fur:
- fur
20220301-fy:
- fy
20220301-ga:
- ga
20220301-gag:
- gag
20220301-gan:
- gan
20220301-gd:
- gd
20220301-gl:
- gl
20220301-glk:
- glk
20220301-gn:
- gn
20220301-gom:
- gom
20220301-gor:
- gor
20220301-got:
- got
20220301-gu:
- gu
20220301-gv:
- gv
20220301-ha:
- ha
20220301-hak:
- hak
20220301-haw:
- haw
20220301-he:
- he
20220301-hi:
- hi
20220301-hif:
- hif
20220301-ho:
- ho
20220301-hr:
- hr
20220301-hsb:
- hsb
20220301-ht:
- ht
20220301-hu:
- hu
20220301-hy:
- hy
20220301-ia:
- ia
20220301-id:
- id
20220301-ie:
- ie
20220301-ig:
- ig
20220301-ii:
- ii
20220301-ik:
- ik
20220301-ilo:
- ilo
20220301-inh:
- inh
20220301-io:
- io
20220301-is:
- is
20220301-it:
- it
20220301-iu:
- iu
20220301-ja:
- ja
20220301-jam:
- jam
20220301-jbo:
- jbo
20220301-jv:
- jv
20220301-ka:
- ka
20220301-kaa:
- kaa
20220301-kab:
- kab
20220301-kbd:
- kbd
20220301-kbp:
- kbp
20220301-kg:
- kg
20220301-ki:
- ki
20220301-kj:
- kj
20220301-kk:
- kk
20220301-kl:
- kl
20220301-km:
- km
20220301-kn:
- kn
20220301-ko:
- ko
20220301-koi:
- koi
20220301-krc:
- krc
20220301-ks:
- ks
20220301-ksh:
- ksh
20220301-ku:
- ku
20220301-kv:
- kv
20220301-kw:
- kw
20220301-ky:
- ky
20220301-la:
- la
20220301-lad:
- lad
20220301-lb:
- lb
20220301-lbe:
- lbe
20220301-lez:
- lez
20220301-lfn:
- lfn
20220301-lg:
- lg
20220301-li:
- li
20220301-lij:
- lij
20220301-lmo:
- lmo
20220301-ln:
- ln
20220301-lo:
- lo
20220301-lrc:
- lrc
20220301-lt:
- lt
20220301-ltg:
- ltg
20220301-lv:
- lv
20220301-mai:
- mai
20220301-map-bms:
- unknown
20220301-mdf:
- mdf
20220301-mg:
- mg
20220301-mh:
- mh
20220301-mhr:
- mhr
20220301-mi:
- mi
20220301-min:
- min
20220301-mk:
- mk
20220301-ml:
- ml
20220301-mn:
- mn
20220301-mr:
- mr
20220301-mrj:
- mrj
20220301-ms:
- ms
20220301-mt:
- mt
20220301-mus:
- mus
20220301-mwl:
- mwl
20220301-my:
- my
20220301-myv:
- myv
20220301-mzn:
- mzn
20220301-na:
- na
20220301-nah:
- nah
20220301-nap:
- nap
20220301-nds:
- nds
20220301-nds-nl:
- nds-nl
20220301-ne:
- ne
20220301-new:
- new
20220301-ng:
- ng
20220301-nl:
- nl
20220301-nn:
- nn
20220301-no:
- "no"
20220301-nov:
- nov
20220301-nrm:
- nrf
20220301-nso:
- nso
20220301-nv:
- nv
20220301-ny:
- ny
20220301-oc:
- oc
20220301-olo:
- olo
20220301-om:
- om
20220301-or:
- or
20220301-os:
- os
20220301-pa:
- pa
20220301-pag:
- pag
20220301-pam:
- pam
20220301-pap:
- pap
20220301-pcd:
- pcd
20220301-pdc:
- pdc
20220301-pfl:
- pfl
20220301-pi:
- pi
20220301-pih:
- pih
20220301-pl:
- pl
20220301-pms:
- pms
20220301-pnb:
- pnb
20220301-pnt:
- pnt
20220301-ps:
- ps
20220301-pt:
- pt
20220301-qu:
- qu
20220301-rm:
- rm
20220301-rmy:
- rmy
20220301-rn:
- rn
20220301-ro:
- ro
20220301-roa-rup:
- rup
20220301-roa-tara:
- unknown
20220301-ru:
- ru
20220301-rue:
- rue
20220301-rw:
- rw
20220301-sa:
- sa
20220301-sah:
- sah
20220301-sat:
- sat
20220301-sc:
- sc
20220301-scn:
- scn
20220301-sco:
- sco
20220301-sd:
- sd
20220301-se:
- se
20220301-sg:
- sg
20220301-sh:
- sh
20220301-si:
- si
20220301-simple:
- simple
20220301-sk:
- sk
20220301-sl:
- sl
20220301-sm:
- sm
20220301-sn:
- sn
20220301-so:
- so
20220301-sq:
- sq
20220301-sr:
- sr
20220301-srn:
- srn
20220301-ss:
- ss
20220301-st:
- st
20220301-stq:
- stq
20220301-su:
- su
20220301-sv:
- sv
20220301-sw:
- sw
20220301-szl:
- szl
20220301-ta:
- ta
20220301-tcy:
- tcy
20220301-te:
- te
20220301-tet:
- tdt
20220301-tg:
- tg
20220301-th:
- th
20220301-ti:
- ti
20220301-tk:
- tk
20220301-tl:
- tl
20220301-tn:
- tn
20220301-to:
- to
20220301-tpi:
- tpi
20220301-tr:
- tr
20220301-ts:
- ts
20220301-tt:
- tt
20220301-tum:
- tum
20220301-tw:
- tw
20220301-ty:
- ty
20220301-tyv:
- tyv
20220301-udm:
- udm
20220301-ug:
- ug
20220301-uk:
- uk
20220301-ur:
- ur
20220301-uz:
- uz
20220301-ve:
- ve
20220301-vec:
- vec
20220301-vep:
- vep
20220301-vi:
- vi
20220301-vls:
- vls
20220301-vo:
- vo
20220301-wa:
- wa
20220301-war:
- war
20220301-wo:
- wo
20220301-wuu:
- wuu
20220301-xal:
- xal
20220301-xh:
- xh
20220301-xmf:
- xmf
20220301-yi:
- yi
20220301-yo:
- yo
20220301-za:
- za
20220301-zea:
- zea
20220301-zh:
- zh
20220301-zh-classical:
- lzh
20220301-zh-min-nan:
- nan
20220301-zh-yue:
- yue
20220301-zu:
- zu
---
# Dataset Card for Wikipedia
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
The Wikipedia dataset contains cleaned articles in all languages.
The datasets are built from the Wikipedia dumps
(https://dumps.wikimedia.org/), with one subset per language. Each example
contains the content of one full Wikipedia article, cleaned to strip
markup and unwanted sections (references, etc.).
The articles are parsed using the `mwparserfromhell` tool.
To load this dataset you need to install Apache Beam and `mwparserfromhell` first:
```bash
pip install apache_beam mwparserfromhell
```
Then, you can load any subset of Wikipedia per language and per date this way:
```python
from datasets import load_dataset
load_dataset("wikipedia", language="sw", date="20220120", beam_runner=...)
```
where you can pass as `beam_runner` any Apache Beam supported runner for (distributed) data processing
(see [here](https://beam.apache.org/documentation/runners/capability-matrix/)).
Pass "DirectRunner" to run it on your machine.
You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).
Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:
```python
from datasets import load_dataset
load_dataset("wikipedia", "20220301.en")
```
The list of pre-processed subsets is:
- "20220301.de"
- "20220301.en"
- "20220301.fr"
- "20220301.frr"
- "20220301.it"
- "20220301.simple"
### Supported Tasks and Leaderboards
The dataset is generally used for Language Modeling.
### Languages
You can find the list of languages [here](https://meta.wikimedia.org/wiki/List_of_Wikipedias).
## Dataset Structure
We show detailed information for the pre-processed configurations of the dataset.
### Data Instances
An example looks as follows:
```
{'id': '1',
'url': 'https://simple.wikipedia.org/wiki/April',
'title': 'April',
'text': 'April is the fourth month...'
}
```
Some subsets of Wikipedia have already been processed by HuggingFace, as you can see below:
#### 20220301.de
- **Size of downloaded dataset files:** 6523.22 MB
- **Size of the generated dataset:** 8905.28 MB
- **Total amount of disk used:** 15428.50 MB
#### 20220301.en
- **Size of downloaded dataset files:** 20598.31 MB
- **Size of the generated dataset:** 20275.52 MB
- **Total amount of disk used:** 40873.83 MB
#### 20220301.fr
- **Size of downloaded dataset files:** 5602.57 MB
- **Size of the generated dataset:** 7375.92 MB
- **Total amount of disk used:** 12978.49 MB
#### 20220301.frr
- **Size of downloaded dataset files:** 12.44 MB
- **Size of the generated dataset:** 9.13 MB
- **Total amount of disk used:** 21.57 MB
#### 20220301.it
- **Size of downloaded dataset files:** 3516.44 MB
- **Size of the generated dataset:** 4539.94 MB
- **Total amount of disk used:** 8056.39 MB
#### 20220301.simple
- **Size of downloaded dataset files:** 239.68 MB
- **Size of the generated dataset:** 235.07 MB
- **Total amount of disk used:** 474.76 MB
### Data Fields
The data fields are the same among all configurations:
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
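A short sketch of reading these fields from a loaded article (assuming the `20220301.simple` config):
```python
from datasets import load_dataset

# Illustrative sketch: inspect the four fields of a single article.
wiki = load_dataset("wikipedia", "20220301.simple", split="train")
article = wiki[0]
print(article["id"], article["url"])  # string ID and canonical article URL
print(article["title"])               # article title
print(article["text"][:200])          # first 200 characters of the cleaned article body
```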
### Data Splits
The number of examples in each pre-processed configuration is shown below:
| name | train |
|-----------------|--------:|
| 20220301.de | 2665357 |
| 20220301.en | 6458670 |
| 20220301.fr | 2402095 |
| 20220301.frr | 15199 |
| 20220301.it | 1743035 |
| 20220301.simple | 205328 |
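These counts can also be checked programmatically; a sketch using the small `20220301.frr` config:
```python
from datasets import load_dataset

# Sketch: verify the train split size for a small pre-processed config.
wiki_frr = load_dataset("wikipedia", "20220301.frr", split="train")
print(wiki_frr.num_rows)  # expected: 15199 for the 20220301 dump
```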
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.