Downloading a small portion of the dataset

#4
by thurac2022 - opened

Is there a way to download just a small portion of the dataset, for example per language and/or per programming task (data science, etc.)?

BigCode org
edited Mar 1

Hi @thurac2022 ! Because this version of The Stack is grouped by repository (to train with repo context), filtering by language is only possible on the fly, e.g.:

from datasets import load_dataset

ds = load_dataset("bigcode/the-stack-v2-train-smol-ids", streaming=True, split="train")

# In a loop:
for sample in iter(ds):
    for file in sample["files"]:
        if file["language"]:  # replace with a check for the language(s) you need
            ...

# Or with .map():
ds = ds.map(your_language_filtering_function)

With streaming=True you don't actually download the whole dataset locally.
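
For reference, a minimal sketch of what such a filtering function could look like (the helper name and the choice of "Python" are just placeholders; it assumes each entry in sample["files"] carries a "language" field, as in the loop above):

# Hypothetical helper: keep only the Python files inside each repository sample
def keep_python_files(sample):
    sample["files"] = [f for f in sample["files"] if f["language"] == "Python"]
    return sample

ds = ds.map(keep_python_files)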

Also, the bigger datasets like https://huggingface.co/datasets/bigcode/the-stack-v2-dedup have a flat structure, so you can do:

# specific language (e.g. Dockerfiles)
ds = load_dataset("bigcode/the-stack-v2-dedup", data_dir="data/Dockerfile", split="train")
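
If you only need a small slice even within one language directory, a sketch combining data_dir with streaming and take() avoids downloading the rest (the sample count here is arbitrary):

from datasets import load_dataset

# Stream only the Dockerfile subset and keep the first 1,000 samples
ds = load_dataset("bigcode/the-stack-v2-dedup", data_dir="data/Dockerfile",
                  split="train", streaming=True)
small_slice = list(ds.take(1000))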

@anton-l I get this exception when using the AWS download script:

Traceback (most recent call last):
  File "/data_script/data_tmp/starcoder_data_download.py", line 21, in <module>
    for row in ds:
  File "/usr/local/python/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1388, in __iter__
    for key, example in ex_iterable:
  File "/usr/local/python/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 679, in __iter__
    yield from self._iter()
  File "/usr/local/python/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 752, in _iter
    transformed_example.update(self.function(*function_args, **self.fn_kwargs))
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data_script/data_tmp/starcoder_data_download.py", line 20, in <lambda>
    ds = ds.map(lambda row: download_contents(row["blob_id"], row["src_encoding"]))
                                              ~~~^^^^^^^^^^^
KeyError: 'blob_id'
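
(A guess at the cause, not a confirmed answer: the traceback suggests the script was pointed at a repo-grouped config such as the-stack-v2-train-smol-ids, where the per-file metadata sits inside each entry of sample["files"] rather than at the top level, whereas flat configs like the-stack-v2-dedup expose blob_id per row. Assuming the nested entries carry blob_id and src_encoding, mapping over the nested files should work; download_contents below is the helper from the user's own script.)

# Hypothetical fix, assuming blob_id/src_encoding live inside each "files" entry
def download_repo_files(row):
    row["contents"] = [
        download_contents(f["blob_id"], f["src_encoding"]) for f in row["files"]
    ]
    return row

ds = ds.map(download_repo_files)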
