Tokenized (Llama 3) version of NousResearch/dolma-v1_7-305B as a Nanotron dataset, split into 10 GB chunks.

To download:

huggingface-cli download --repo-type dataset --local-dir dolma-v1_7-305B-tokenized-llama3-nanoset --local-dir-use-symlinks False NousResearch/dolma-v1_7-305B-tokenized-llama3-nanoset
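
Equivalently, the snapshot can be fetched from Python with huggingface_hub's snapshot_download (a minimal sketch; the local directory name mirrors the CLI command above):

from huggingface_hub import snapshot_download

# Download all chunk files of the dataset repo into a local directory
snapshot_download(
    repo_id="NousResearch/dolma-v1_7-305B-tokenized-llama3-nanoset",
    repo_type="dataset",
    local_dir="dolma-v1_7-305B-tokenized-llama3-nanoset",
)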

To recombine:

cat dolma-v1_7-305B-tokenized-llama3-nanoset/dolma-v1_7-305B-tokenized-llama3-nanoset.npy.* > dolma-v1_7-305B-tokenized-llama3-nanoset.npy
rm -rf dolma-v1_7-305B-tokenized-llama3-nanoset
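
Where cat is unavailable (e.g. on Windows), a minimal Python sketch of the same recombination, assuming the chunk suffixes sort lexicographically in the intended order (the shell glob above makes the same assumption):

import glob
import shutil

chunks = sorted(glob.glob(
    "dolma-v1_7-305B-tokenized-llama3-nanoset/"
    "dolma-v1_7-305B-tokenized-llama3-nanoset.npy.*"))
with open("dolma-v1_7-305B-tokenized-llama3-nanoset.npy", "wb") as out:
    for chunk in chunks:
        with open(chunk, "rb") as f:
            shutil.copyfileobj(f, out)  # stream each 10 GB chunk into the output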

The recombined file can also be used directly with NumPy, for example:

import numpy as np

# Memory-map the recombined file as a flat array of int32 token IDs;
# nothing is read from disk until elements are accessed
dataset_buffer_mmap = np.memmap("dolma-v1_7-305B-tokenized-llama3-nanoset.npy",
                                mode="r", order="C", dtype=np.int32)
# Zero-copy view of the mapped tokens
dataset_buffer = memoryview(dataset_buffer_mmap)
dataset_number_of_tokens = int(len(dataset_buffer))
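
Continuing from the block above, slices of the memmap materialize only the requested tokens, so individual samples can be inspected without loading the full ~300B-token file (the window size 16 here is an arbitrary choice for illustration):

# Copies only the first 16 int32 values into memory; the rest stays on disk
first_tokens = np.array(dataset_buffer_mmap[:16])
print(first_tokens)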