---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  splits:
  - name: train
    num_bytes: 16824296700
    num_examples: 4103487
  - name: test
    num_bytes: 885489300
    num_examples: 215973
  download_size: 8311975924
  dataset_size: 17709786000
---

# Dataset Card for "large_spanish_corpus_ds_tokenized_and_gropuped"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
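
The card exposes only the tokenized schema: a single `input_ids` feature holding sequences of int32 token ids. The byte counts divide evenly (4,100 bytes per row in both splits), which suggests fixed-length blocks of 1,025 int32 ids per example, consistent with the "tokenized and grouped" name. A minimal loading sketch follows; the Hub namespace (`<user>`) is a placeholder and must be replaced with the dataset's actual owner.

```python
from datasets import load_dataset

# Hypothetical repo id: substitute the real Hub namespace for "<user>".
ds = load_dataset("<user>/large_spanish_corpus_ds_tokenized_and_gropuped")

# Splits as reported in the card: train (4,103,487 rows), test (215,973 rows).
print(ds)

# Each row is a sequence of pre-tokenized int32 ids; inspect one example.
example = ds["train"][0]["input_ids"]
print(len(example), example[:10])
```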