---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
  splits:
    - name: train
      num_bytes: 22000087164
      num_examples: 21400863
    - name: validation
      num_bytes: 2401005024
      num_examples: 2335608
  download_size: 7083897934
  dataset_size: 24401092188
---

# Dataset Card for "legaltokenized256"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
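
A minimal sketch of loading and inspecting the dataset with the `datasets` library. The repository id below is a placeholder (the actual `<user>/legaltokenized256` path on the Hugging Face Hub is not given here), and the sequence length of 256 is inferred from the dataset name rather than stated in the metadata.

```python
from datasets import load_dataset

# Placeholder repo id — substitute the real Hub path.
dataset = load_dataset("<user>/legaltokenized256")

# Two splits: train (21,400,863 examples) and validation (2,335,608 examples).
print(dataset)

# Each example holds a pre-tokenized sequence of int32 token ids.
example = dataset["train"][0]
print(len(example["input_ids"]))  # likely 256, per the dataset name
```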