This dataset is broken!

#5
opened by j3m

Running this script results in an error:

from datasets import load_dataset

mbpp = load_dataset("mbpp")

The error output:
Failed to read file '/home/me/.cache/huggingface/datasets/downloads/68316d7fb2c761eb5457df5676850c869a21a7a7b97e924b5cb973721175a81a' with error <class 'ValueError'>: Couldn't cast
source_file: string
task_id: int32
prompt: string
code: string
test_imports: list<item: string>
  child 0, item: string
test_list: list<item: string>
  child 0, item: string
-- schema metadata --
huggingface: '{"info": {"features": {"source_file": {"dtype": "string", "' + 339
to
{'task_id': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None), 'code': Value(dtype='string', id=None), 'test_list': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'test_setup_code': Value(dtype='string', id=None), 'challenge_test_list': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match
Traceback (most recent call last):
  File "/home/me/code/work/pyenv/lib/python3.7/site-packages/datasets/builder.py", line 1879, in _prepare_split_single
    for _, table in generator:
  File "/home/me/code/work/pyenv/lib/python3.7/site-packages/datasets/packaged_modules/parquet/parquet.py", line 82, in _generate_tables
    yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
  File "/home/me/code/work/pyenv/lib/python3.7/site-packages/datasets/packaged_modules/parquet/parquet.py", line 61, in _cast_table
    pa_table = table_cast(pa_table, self.info.features.arrow_schema)
  File "/home/me/code/work/pyenv/lib/python3.7/site-packages/datasets/table.py", line 2324, in table_cast
    return cast_table_to_schema(table, schema)
  File "/home/me/code/work/pyenv/lib/python3.7/site-packages/datasets/table.py", line 2282, in cast_table_to_schema
    raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
source_file: string
task_id: int32
prompt: string
code: string
test_imports: list<item: string>
  child 0, item: string
test_list: list<item: string>
  child 0, item: string
-- schema metadata --
huggingface: '{"info": {"features": {"source_file": {"dtype": "string", "' + 339
to
{'task_id': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None), 'code': Value(dtype='string', id=None), 'test_list': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'test_setup_code': Value(dtype='string', id=None), 'challenge_test_list': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/me/code/work/scripts/24-03-04@11:44:40.py", line 3, in <module>
    mbpp = load_dataset("mbpp")
  File "/home/me/code/work/pyenv/lib/python3.7/site-packages/datasets/load.py", line 1815, in load_dataset
    storage_options=storage_options,
  File "/home/me/code/work/pyenv/lib/python3.7/site-packages/datasets/builder.py", line 913, in download_and_prepare
    **download_and_prepare_kwargs,
  File "/home/me/code/work/pyenv/lib/python3.7/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/me/code/work/pyenv/lib/python3.7/site-packages/datasets/builder.py", line 1768, in _prepare_split
    gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
  File "/home/me/code/work/pyenv/lib/python3.7/site-packages/datasets/builder.py", line 1912, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
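For anyone debugging the same mismatch: the cached Parquet file contains columns such as prompt and test_imports, while the expected features still list text, test_setup_code, and challenge_test_list, hence the failed cast. The file's actual schema can be confirmed directly with pyarrow (a minimal sketch; the cache path is the one from the error above and will differ per machine):

import pyarrow.parquet as pq

# Cache path copied from the error message above; it differs per machine.
path = "/home/me/.cache/huggingface/datasets/downloads/68316d7fb2c761eb5457df5676850c869a21a7a7b97e924b5cb973721175a81a"
print(pq.read_schema(path))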
albertvillanova (Datasets Maintainers org)

You have to update your datasets library:

pip install -U datasets
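After upgrading, a quick sanity check (a minimal sketch) is to print the installed version and reload the dataset:

import datasets
from datasets import load_dataset

print(datasets.__version__)  # should now report a recent release
mbpp = load_dataset("mbpp")
print(mbpp)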
albertvillanova changed discussion status to closed

Hi @albertvillanova,

Thank you for the suggestion to update the datasets library. However, I still hit the same issue, which seems to stem from the datasets library having dropped support for Python 3.7, so a new enough release cannot be installed in that environment. Since the legacy evaluations for bigcode-project/bigcode-evaluation-harness on the MBPP dataset were performed with Python 3.7, this is a problem for anyone trying to reproduce those results.
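If upgrading datasets is not an option on Python 3.7, one possible workaround is to bypass the library and read the dataset's Parquet export directly with pandas and pyarrow. This is only a sketch: the file URL below is an assumption and should be checked against the actual file names in the dataset repository before use.

import io
import urllib.request

import pandas as pd  # requires pyarrow as the Parquet engine

# Hypothetical file URL: verify the real Parquet file name in the mbpp
# dataset repository's file listing before using.
url = "https://huggingface.co/datasets/mbpp/resolve/main/full/test-00000-of-00001.parquet"
with urllib.request.urlopen(url) as resp:
    df = pd.read_parquet(io.BytesIO(resp.read()))
print(df.columns.tolist())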
