Dataset Card for "wiki40b"

Dataset Summary

Cleaned-up text for 40+ Wikipedia language editions of pages corresponding to entities. The dataset has train/dev/test splits per language. It is cleaned up by page filtering to remove disambiguation pages, redirect pages, deleted pages, and non-entity pages. Each example contains the Wikidata id of the entity and the full Wikipedia article after page processing that removes non-content sections and structured objects.
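
As a usage sketch (assuming the Hugging Face datasets library is installed), the English configuration can be loaded roughly as follows; other language codes can be substituted for "en":

  from datasets import load_dataset

  # Load the English configuration; other Wikipedia language codes
  # (e.g. "fr", "de", "ja") can be passed in place of "en".
  wiki = load_dataset("wiki40b", "en")

  # A DatasetDict with train/validation/test splits, as listed below.
  print(wiki)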

Supported Tasks

More Information Needed

Languages

More Information Needed

Dataset Structure

We show detailed information for up to 5 configurations of the dataset.

Data Instances

en

  • Size of downloaded dataset files: 0.00 MB
  • Size of the generated dataset: 9988.05 MB
  • Total amount of disk used: 9988.05 MB

An example of 'train' contains the fields described under Data Fields below.

Data Fields

The data fields are the same among all splits.

en

  • wikidata_id: a string feature.
  • text: a string feature.
  • version_id: a string feature.
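
A minimal sketch of reading these fields from one training example (illustrative only; the field names follow the list above):

  from datasets import load_dataset

  train = load_dataset("wiki40b", "en", split="train")

  example = train[0]
  # Each example carries the Wikidata entity id, the processed article
  # text, and the article version id, all stored as strings.
  print(example["wikidata_id"])
  print(example["version_id"])
  print(example["text"][:200])  # beginning of the processed article text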

Data Splits Sample Size

name   train     validation   test
en     2926536   163597       162274

Dataset Creation

Curation Rationale

More Information Needed

Source Data

More Information Needed

Annotations

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

Contributions

Thanks to @jplu, @patrickvonplaten, @thomwolf, @albertvillanova, @lhoestq for adding this dataset.

