Tokenized (Llama 2) version of NousResearch/dolma-v1_7-30B, stored as a Nanotron dataset split into 10 GB chunks.
To download:
```bash
huggingface-cli download --repo-type dataset --local-dir dolma-v1_7-30B-tokenized-llama2-nanoset --local-dir-use-symlinks False NousResearch/dolma-v1_7-30B-tokenized-llama2-nanoset
```
To recombine:
```bash
cat dolma-v1_7-30B-tokenized-llama2-nanoset/dolma-v1_7-30B-tokenized-llama2-nanoset_input_ids.npy.* > dolma-v1_7-30B-tokenized-llama2-nanoset.npy
rm -rf dolma-v1_7-30B-tokenized-llama2-nanoset
```
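If `cat` is not available (for example on Windows), the chunks can also be recombined in Python. This is a minimal sketch, assuming the chunk suffixes sort lexicographically in the intended order, which is the same assumption the shell glob above relies on; `recombine` is a hypothetical helper, not part of this dataset.

```python
import glob
import shutil

def recombine(prefix: str, out_path: str) -> None:
    """Concatenate all files matching `prefix.*` into `out_path`.

    Chunks are processed in sorted (lexicographic) order, mirroring
    the order a shell glob would expand them in.
    """
    chunks = sorted(glob.glob(prefix + ".*"))
    with open(out_path, "wb") as out:
        for chunk in chunks:
            with open(chunk, "rb") as f:
                # Stream each chunk into the output without loading it fully into memory
                shutil.copyfileobj(f, out)
```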
The recombined file can also be used directly with NumPy, for example:

```python
import numpy as np

# Memory-map the token buffer so it can be read without loading it into RAM
dataset_buffer_mmap = np.memmap(
    "dolma-v1_7-30B-tokenized-llama2-nanoset.npy",
    mode="r", order="C", dtype=np.int32,
)
dataset_buffer = memoryview(dataset_buffer_mmap)
# len() of the memoryview counts int32 elements, i.e. tokens
dataset_number_of_tokens = int(len(dataset_buffer))
```
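Because the buffer is memory-mapped, arbitrary token windows can be sliced out cheaply, e.g. to feed fixed-length sequences to a training loop. Below is a small sketch of that pattern; `read_tokens` is a hypothetical helper (not part of this dataset), assuming a raw int32 token buffer like the one produced above.

```python
import numpy as np

def read_tokens(path: str, start: int, length: int) -> np.ndarray:
    """Read `length` tokens starting at token index `start` from a raw
    int32 token buffer (hypothetical helper, for illustration only)."""
    buf = np.memmap(path, mode="r", order="C", dtype=np.int32)
    # Slicing a memmap only touches the requested pages; copy the
    # window into a regular in-memory array before returning it
    return np.array(buf[start:start + length])
```

For example, `read_tokens(path, 0, 4096)` would return the first 4096 tokens as a regular NumPy array.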