---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  splits:
  - name: train
    num_bytes: 3601107900
    num_examples: 878319
  - name: test
    num_bytes: 187816900
    num_examples: 45809
  download_size: 1807775268
  dataset_size: 3788924800
---

# Dataset Card for "spanish_biomedical_ds_tokenized_and_gropuped"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
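The split statistics above imply a fixed example length. A minimal sanity check, assuming each token in `input_ids` is stored as a 4-byte `int32` (consistent with the declared `sequence: int32` feature):

```python
# Sanity-check the split statistics from the dataset_info block above.
# Assumption: each `input_ids` token occupies 4 bytes (int32).
splits = {
    "train": {"num_bytes": 3_601_107_900, "num_examples": 878_319},
    "test": {"num_bytes": 187_816_900, "num_examples": 45_809},
}

for name, s in splits.items():
    bytes_per_example = s["num_bytes"] / s["num_examples"]
    tokens_per_example = bytes_per_example / 4  # int32 -> 4 bytes per token
    print(f"{name}: {tokens_per_example:.0f} tokens per example")

# dataset_size should equal the sum of the split byte counts.
assert 3_601_107_900 + 187_816_900 == 3_788_924_800
```

Both splits work out to exactly 1025 tokens per example, which is consistent with the "grouped" naming: examples appear to have been concatenated and chunked into fixed-length blocks.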