---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 24408319844
    num_examples: 16370815
  download_size: 10890317773
  dataset_size: 24408319844
---

# Dataset Card for "tokenized_enwiki"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
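A minimal loading sketch based on the metadata above: the dataset has a single `train` split with one string feature, `text`. The repo id `tokenized_enwiki` below is a placeholder; substitute the dataset's actual Hugging Face Hub path.

```python
from datasets import load_dataset

# Placeholder repo id; replace with the actual Hub path hosting this dataset.
dataset = load_dataset("tokenized_enwiki", split="train")

# Each record exposes a single string feature named "text".
print(dataset[0]["text"][:200])
print(f"{len(dataset):,} examples")  # per the metadata: 16,370,815

# The train split is ~24 GB on disk; streaming avoids downloading it all up front.
streamed = load_dataset("tokenized_enwiki", split="train", streaming=True)
for example in streamed.take(3):
    print(len(example["text"]))
```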