---
dataset_info:
  config_name: gpt-4
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: token_length
    dtype: int64
  - name: text_length
    dtype: int64
  splits:
  - name: train
    num_bytes: 324653709
    num_examples: 15000
  download_size: 187754656
  dataset_size: 324653709
configs:
- config_name: gpt-4
  data_files:
  - split: train
    path: gpt-4/train-*
---
# Dataset Card for "wikipedia_token"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
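
A minimal sketch of loading the `gpt-4` configuration declared above with the 🤗 `datasets` library. The repository id `wikipedia_token` is assumed from the card title; when loading from the Hugging Face Hub, prepend the owning user or organization namespace.

```python
from datasets import load_dataset

# Load the train split of the gpt-4 configuration.
# "wikipedia_token" is inferred from the card title; replace with the
# full "namespace/wikipedia_token" id when loading from the Hub.
ds = load_dataset("wikipedia_token", "gpt-4", split="train")

# 15,000 examples with fields: id, title, text, token_length, text_length
print(ds)
print(ds[0]["title"])
```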