---
language:
- en
license: mit
task_categories:
- text-generation
dataset_info:
  features:
  - name: pageid
    dtype: int64
  - name: title
    dtype: string
  - name: revid
    dtype: int64
  - name: description
    dtype: string
  - name: categories
    sequence: string
  - name: markdown
    dtype: string
  - name: sections
    sequence: string
  - name: token
    dtype: int64
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 159522657
    num_examples: 18241
  - name: validate
    num_bytes: 28217967
    num_examples: 3220
  - name: test
    num_bytes: 10035346
    num_examples: 1130
  download_size: 114465310
  dataset_size: 197775970
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validate
    path: data/validate-*
  - split: test
    path: data/test-*
---
- modified version of the GoodWiki dataset from https://huggingface.co/datasets/euirim/goodwiki
- added a `sections` column (markdown headings) and a `token` column (token count computed with tiktoken)
- only contains articles with fewer than 3,000 tokens
- train/validate/test split (80/15/5)
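The filtering and splitting steps above can be sketched as follows. This is an illustrative reconstruction, not the actual preprocessing code: the `split_dataset` helper, the seed, and the toy records are assumptions, and the real pipeline counted tokens with tiktoken rather than the placeholder values used here.

```python
import random

def split_dataset(records, fractions=(0.80, 0.15, 0.05), seed=42):
    """Shuffle records and split them into train/validate/test by fractions."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * fractions[0])
    n_val = int(n * fractions[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# Toy records with a precomputed token count (the real dataset uses tiktoken).
articles = [{"title": f"article-{i}", "token": 100 * i} for i in range(100)]

# Keep only articles under the 3,000-token cap, then split 80/15/5.
kept = [a for a in articles if a["token"] < 3000]
train, validate, test = split_dataset(kept)
```

With this sketch, 30 of the 100 toy articles survive the token filter, and the split yields 24/4/2 examples, mirroring the 80/15/5 proportions of the published splits.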