---
language:
- en
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 22057264844
    num_examples: 6797834
  download_size: 12695118248
  dataset_size: 22057264844
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

Processed English Wikipedia dump from the 20240320 snapshot.

Made using the [wikipedia](https://huggingface.co/datasets/wikipedia) dataset repository on the Hugging Face Hub.
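As a quick sanity check, the split statistics in the YAML header can be turned into a few derived figures. The raw numbers below are copied from the card; the average article size and on-disk compression ratio are simple arithmetic, not values stated by the card itself:

```python
# Split statistics copied from the dataset card's YAML header.
train = {
    "num_examples": 6_797_834,        # number of Wikipedia articles
    "num_bytes": 22_057_264_844,      # uncompressed size of the train split
    "download_size": 12_695_118_248,  # size of the shards as downloaded
}

# Derived figures (computed here, not stated in the card).
avg_bytes = train["num_bytes"] / train["num_examples"]
compression = train["download_size"] / train["num_bytes"]

print(f"average article size: {avg_bytes / 1024:.1f} KiB")
print(f"download / uncompressed ratio: {compression:.2%}")
```

This works out to roughly 3 KiB of text per article, with the downloaded shards a little under 60% of the uncompressed size.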