---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: wikidata_id
      dtype: string
    - name: text
      dtype: string
    - name: version_id
      dtype: string
  splits:
    - name: train
      num_bytes: 220855898
      num_examples: 109486
    - name: validation
      num_bytes: 12416304
      num_examples: 6173
    - name: test
      num_bytes: 12818380
      num_examples: 6219
  download_size: 150569852
  dataset_size: 246090582
license: cc-by-sa-4.0
task_categories:
  - text-generation
language:
  - da
pretty_name: Wiki40b-da
size_categories:
  - 100K<n<1M
---

# Dataset Card for "wiki40b-da"

## Dataset Description

- **Point of Contact:** Dan Saattrup Nielsen
- **Size of downloaded dataset files:** 150.57 MB
- **Size of the generated dataset:** 246.09 MB
- **Total amount of disk used:** 396.66 MB

### Dataset Summary

This dataset is an upload of the Danish part of the Wiki40b dataset, a cleaned version of a Wikipedia dump.

The dataset is identical in content to the original wiki40b dataset on the Hugging Face Hub, but loading that one requires `apache_beam`, `tensorflow`, and `mwparserfromhell`, which can lead to dependency issues since these packages are not compatible with several newer ones.

The training, validation and test splits are the original ones.
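
Because this upload needs no extra dependencies, it can be loaded directly with the `datasets` library. A minimal sketch, assuming the repository ID `alexandrainst/wiki40b-da` (replace it with the actual ID of this repository on the Hub):

```python
from datasets import load_dataset

# Load all three splits; the repository ID below is an assumption
dataset = load_dataset("alexandrainst/wiki40b-da")

# DatasetDict with train (109,486), validation (6,173) and test (6,219) examples
print(dataset)

# Inspect a single example
example = dataset["train"][0]
print(example["wikidata_id"], example["text"][:100])
```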

### Languages

The dataset is available in Danish (da).

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 150.57 MB
- **Size of the generated dataset:** 246.09 MB
- **Total amount of disk used:** 396.66 MB

An example from the dataset looks as follows.

```python
{
  'wikidata_id': 'Q17341862',
  'text': "\n_START_ARTICLE_\nÆgyptiske tekstiler\n_START_PARAGRAPH_\nTekstiler havde mange (...)",
  'version_id': '9018011197452276273'
}
```
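
The `text` field encodes document structure with inline markers such as `_START_ARTICLE_` and `_START_PARAGRAPH_`; the upstream Wiki40b format also uses `_START_SECTION_` and `_NEWLINE_`. A minimal cleaning sketch, assuming those standard markers (the function name is illustrative):

```python
import re

def clean_wiki40b_text(text: str) -> str:
    """Strip Wiki40b structure markers and return plain text.

    Assumes the standard Wiki40b markers; adjust if this upload differs.
    """
    # Replace article/section/paragraph markers with newlines
    text = re.sub(r"_START_(ARTICLE|SECTION|PARAGRAPH)_", "\n", text)
    # Wiki40b stores in-paragraph newlines as a literal marker
    text = text.replace("_NEWLINE_", "\n")
    # Collapse the extra blank lines the substitutions leave behind
    return re.sub(r"\n{3,}", "\n\n", text).strip()

example_text = "\n_START_ARTICLE_\nÆgyptiske tekstiler\n_START_PARAGRAPH_\nTekstiler havde mange ..."
print(clean_wiki40b_text(example_text))
# Ægyptiske tekstiler
#
# Tekstiler havde mange ...
```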

### Data Fields

The data fields are the same among all splits.

- `wikidata_id`: a string feature.
- `text`: a string feature.
- `version_id`: a string feature.

### Dataset Statistics

There are 109,486 samples in the training split, 6,173 in the validation split, and 6,219 in the test split.

#### Document Length Distribution

*Figure: distribution of document lengths in the dataset (image not reproduced here).*
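
The unit on the figure's axis is not stated; a sketch of how such a distribution could be computed, here measuring document length in characters of the raw `text` field (repository ID assumed, as above):

```python
from datasets import load_dataset

# Repository ID is an assumption; see the loading example above
train = load_dataset("alexandrainst/wiki40b-da", split="train")

# Character length of each raw text field - one plausible measure
# of document length, since the figure's unit is not stated
lengths = [len(example["text"]) for example in train]

print(f"min={min(lengths)}  max={max(lengths)}  "
      f"mean={sum(lengths) / len(lengths):.0f}")
```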

## Additional Information

### Dataset Curators

Dan Saattrup Nielsen from the Alexandra Institute uploaded this dataset to the Hugging Face Hub.

### Licensing Information

The dataset is licensed under the CC BY-SA 4.0 license.