Cannot load ds

#1 opened by Muennighoff

!pip install -q datasets
from datasets import load_dataset
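# Loading the full train split fails here with a DatasetGenerationCastError (full traceback below)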
ds = load_dataset("stanfordnlp/SHP-2", split="train")

CastError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1988 try:
-> 1989 writer.write_table(table)
1990 except CastError as cast_error:

[8 intermediate frames hidden]
CastError: Couldn't cast
id: string
post_id: int64
domain: string
history: string
created_at_utc_A: double
created_at_utc_B: double
score_A: int64
score_B: int64
human_ref_A: string
human_ref_B: string
labels: int64
metadata_A: string
metadata_B: string
seconds_difference: double
score_ratio: double
to
{'post_id': Value(dtype='string', id=None), 'domain': Value(dtype='string', id=None), 'upvote_ratio': Value(dtype='float64', id=None), 'history': Value(dtype='string', id=None), 'c_root_id_A': Value(dtype='string', id=None), 'c_root_id_B': Value(dtype='string', id=None), 'created_at_utc_A': Value(dtype='int64', id=None), 'created_at_utc_B': Value(dtype='int64', id=None), 'score_A': Value(dtype='int64', id=None), 'score_B': Value(dtype='int64', id=None), 'human_ref_A': Value(dtype='string', id=None), 'human_ref_B': Value(dtype='string', id=None), 'labels': Value(dtype='int64', id=None), 'seconds_difference': Value(dtype='float64', id=None), 'score_ratio': Value(dtype='float64', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

DatasetGenerationCastError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1989 writer.write_table(table)
1990 except CastError as cast_error:
-> 1991 raise DatasetGenerationCastError.from_cast_error(
1992 cast_error=cast_error,
1993 builder_name=self.info.builder_name,

DatasetGenerationCastError: An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 3 new columns (metadata_A, id, metadata_B) and 3 missing columns (c_root_id_A, upvote_ratio, c_root_id_B).

This happened while the json dataset builder was generating data using

hf://datasets/stanfordnlp/SHP-2/stackexchange/stack_academia/train.json (at revision aeb14be2c479ff704697d7e3d794fbf2537fdbed)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
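Until the schemas are aligned, a possible workaround is to load only one sub-source so that every file shares a single schema. The sketch below uses the standard data_dir and data_files arguments of load_dataset; the directory name "stackexchange" and the file path are taken from the traceback above, so whether they match the repo layout exactly is an assumption.

from datasets import load_dataset

# Option 1: restrict the load to a single top-level directory so that all
# files in the load share one schema (directory name assumed from the
# traceback path).
ds_se = load_dataset("stanfordnlp/SHP-2", data_dir="stackexchange", split="train")

# Option 2: point directly at a single JSON file inside the repo
# (the path is the one reported in the error message).
ds_acad = load_dataset(
    "stanfordnlp/SHP-2",
    data_files="stackexchange/stack_academia/train.json",
    split="train",
)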

Stanford NLP org

The discrepancy between column names for reddit and stackexchange should now be fixed.
Thank you for pointing this out!
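For anyone who hits this later, a minimal re-check (just the original snippet plus a column printout) should now load cleanly:

from datasets import load_dataset

# Re-run the original failing call; after the column-name fix the reddit and
# stackexchange files should cast to one shared schema.
ds = load_dataset("stanfordnlp/SHP-2", split="train")
print(ds.column_names)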

heidi-zhang changed discussion status to closed
