---
language:
  - en
paperswithcode_id: wiki-40b
pretty_name: Wiki-40B
dataset_info:
  features:
    - name: wikidata_id
      dtype: string
    - name: text
      dtype: string
    - name: version_id
      dtype: string
  config_name: en
  splits:
    - name: train
      num_bytes: 9423623904
      num_examples: 2926536
    - name: validation
      num_bytes: 527383016
      num_examples: 163597
    - name: test
      num_bytes: 522219464
      num_examples: 162274
  download_size: 0
  dataset_size: 10473226384
---

# Dataset Card for "wiki40b"

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

### Dataset Summary

Cleaned-up text for 40+ Wikipedia language editions of pages corresponding to entities. The dataset has train/dev/test splits per language. It is cleaned by page filtering to remove disambiguation pages, redirect pages, deleted pages, and non-entity pages. Each example contains the Wikidata ID of the entity and the full Wikipedia article after page processing that removes non-content sections and structured objects.
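
A minimal sketch of loading the English configuration with the 🤗 `datasets` library is shown below. Depending on the library version, this dataset has historically required extra preprocessing (it was originally distributed as an Apache Beam dataset), so the exact call may differ.

```python
# Minimal sketch: load the English configuration of Wiki-40B with the
# Hugging Face `datasets` library. Depending on the library version,
# additional preprocessing (e.g. an Apache Beam runner) may be required.
from datasets import load_dataset

wiki40b_en = load_dataset("wiki40b", "en")

# The three splits listed in this card: train, validation, test.
print({split: len(ds) for split, ds in wiki40b_en.items()})
```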

### Supported Tasks and Leaderboards

More Information Needed

### Languages

More Information Needed

## Dataset Structure

### Data Instances

#### en

- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 10.47 GB
- **Total amount of disk used:** 10.47 GB

An example of 'train' looks as follows.
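To obtain a concrete instance, the following minimal sketch pulls the first `train` example and inspects its fields; only the field names come from the schema in this card, while the printed values depend on the downloaded data.

```python
# Sketch: inspect one 'train' example. The field names come from the
# schema described in this card; the values depend on the data itself.
from datasets import load_dataset

train_split = load_dataset("wiki40b", "en", split="train")
example = train_split[0]

print(sorted(example.keys()))  # ['text', 'version_id', 'wikidata_id']
print(example["wikidata_id"])  # Wikidata identifier of the entity
print(example["text"][:200])   # beginning of the cleaned article text
```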


### Data Fields

The data fields are the same among all splits.

#### en

- `wikidata_id`: a string feature.
- `text`: a string feature.
- `version_id`: a string feature.
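
In the upstream Wiki-40B release, the `text` field above is a single string in which article structure is encoded with special markers such as `_START_ARTICLE_`, `_START_SECTION_`, `_START_PARAGRAPH_`, and `_NEWLINE_` (line breaks inside a paragraph). Assuming this copy follows the same convention, a small helper like the sketch below can recover plain paragraphs; verify the marker names against an actual example before relying on them.

```python
# Hypothetical helper, assuming the upstream Wiki-40B structure markers
# (_START_ARTICLE_, _START_SECTION_, _START_PARAGRAPH_, _NEWLINE_) are
# present in the `text` field of this copy of the dataset.
def extract_paragraphs(text: str) -> list[str]:
    """Return the article's paragraphs with structure markers removed."""
    paragraphs = []
    for chunk in text.split("_START_PARAGRAPH_")[1:]:
        # A paragraph runs until the next structural marker (if any).
        for marker in ("_START_SECTION_", "_START_ARTICLE_"):
            chunk = chunk.split(marker)[0]
        # _NEWLINE_ marks line breaks inside a paragraph.
        paragraphs.extend(p.strip() for p in chunk.split("_NEWLINE_") if p.strip())
    return paragraphs
```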

### Data Splits

| name | train   | validation | test   |
|------|--------:|-----------:|-------:|
| en   | 2926536 | 163597     | 162274 |

## Dataset Creation

### Curation Rationale

More Information Needed

### Source Data

#### Initial Data Collection and Normalization

More Information Needed

#### Who are the source language producers?

More Information Needed

### Annotations

#### Annotation process

More Information Needed

#### Who are the annotators?

More Information Needed

### Personal and Sensitive Information

More Information Needed

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

More Information Needed

### Licensing Information

More Information Needed

### Citation Information


### Contributions

Thanks to @jplu, @patrickvonplaten, @thomwolf, @albertvillanova, @lhoestq for adding this dataset.