Error when loading dataset

#7
by thcktw - opened

```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1995, in _prepare_split_single
for _, table in generator:
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/parquet/parquet.py", line 97, in generate_tables
yield f"{file_idx}
{batch_idx}", self._cast_table(pa_table)
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/parquet/parquet.py", line 75, in _cast_table
pa_table = table_cast(pa_table, self.info.features.arrow_schema)
File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 2295, in table_cast
return cast_table_to_schema(table, schema)
File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 2249, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
video_id: string
video_link: string
title: string
text: string
channel: string
channel_id: string
date: string
license: string
original_language: string
language_id_method: string
transcription_language: string
word_count: int64
character_count: int64
__index_level_0__: int64
-- schema metadata --
pandas: '{"index_columns": ["__index_level_0__"], "column_indexes": [{"na' + 1897
to
{'video_id': Value(dtype='string', id=None), 'video_link': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'channel': Value(dtype='string', id=None), 'channel_id': Value(dtype='string', id=None), 'date': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'original_language': Value(dtype='string', id=None), 'language_id_method': Value(dtype='string', id=None), 'transcription_language': Value(dtype='string', id=None), 'word_count': Value(dtype='int64', id=None), 'character_count': Value(dtype='int64', id=None)}
because column names don't match
```

I got this error when trying to load the dataset with this code:

```python
from datasets import load_dataset

yt_cc = load_dataset("PleIAs/YouTube-Commons", split="train")
```

I'm also facing this issue.

Yep, me too.

With `dataset = load_dataset("PleIAs/YouTube-Commons", split="train", streaming=True)` you might be able to get some of it and perhaps catch the exception. Will check.
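Here is a minimal sketch of that idea, assuming `CastError` (importable from `datasets.table`, per the traceback) propagates unchanged to the iteration loop; anything read before the bad file should still be usable:

```python
from datasets import load_dataset
from datasets.table import CastError

# Stream so parquet files are read lazily instead of all being downloaded up front.
yt_cc = load_dataset("PleIAs/YouTube-Commons", split="train", streaming=True)

rows = []
try:
    for row in yt_cc:
        rows.append(row)
except CastError:
    # Iteration stops at the first file whose schema doesn't match the declared
    # features; keep whatever was collected up to that point.
    pass

print(f"Collected {len(rows)} rows before the schema mismatch.")
```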

I am facing the same error in streaming mode while iterating, at row 8,000,000 (cctube_234.parquet).

ERROR:datasets.packaged_modules.parquet.parquet:Failed to read file 'hf://datasets/PleIAs/YouTube-Commons@f3d90dacbf126fbc5ddb2a9822e4a3484c8ab68e/cctube_234.parquet' with error <class 'datasets.table.CastError'>: Couldn't cast
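
Judging from the traceback, the cast fails because that shard carries a leftover pandas `__index_level_0__` index column that is not part of the declared features. As a rough workaround sketch (not an official fix), one could download just the offending file and read it directly with pyarrow, dropping the stray column; the repo id and filename below come from the error message, everything else is an assumption:

```python
import pyarrow.parquet as pq
from huggingface_hub import hf_hub_download

# Fetch only the shard that fails (filename taken from the error log above).
path = hf_hub_download(
    repo_id="PleIAs/YouTube-Commons",
    filename="cctube_234.parquet",
    repo_type="dataset",
)

table = pq.read_table(path)

# Drop the stray pandas index column so the schema matches the other shards.
if "__index_level_0__" in table.column_names:
    table = table.drop(["__index_level_0__"])

print(table.schema)
```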
