The dataset viewer is not available for this split.
Cannot load the dataset split (in streaming mode) to extract the first rows.
Error code:   StreamingRowsError
Exception:    CastError
Message:      Couldn't cast
prompt: string
negative_prompt: string
guidance_scale: double
selected_index: int64
timestamp: string
image_000: struct<bytes: binary, path: string>
  child 0, bytes: binary
  child 1, path: string
image_001: struct<bytes: binary, path: string>
  child 0, bytes: binary
  child 1, path: string
image_002: struct<bytes: binary, path: string>
  child 0, bytes: binary
  child 1, path: string
-- schema metadata --
huggingface: '{"info": {"features": {"prompt": {"_type": "Value", "dtype"' + 337
to
{'prompt': Value(dtype='string', id=None), 'negative_prompt': Value(dtype='string', id=None), 'guidance_scale': Value(dtype='float64', id=None), 'selected_index': Value(dtype='int64', id=None), 'timestamp': Value(dtype='string', id=None), 'image_000': Image(decode=True, id=None), 'image_001': Image(decode=True, id=None), 'image_002': Image(decode=True, id=None), 'image_003': Image(decode=True, id=None)}
because column names don't match
Traceback:    Traceback (most recent call last):
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 263, in query
                  pa_table = pa.concat_tables(
                File "pyarrow/table.pxi", line 5245, in pyarrow.lib.concat_tables
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Schema at index 1 was different: 
              prompt: string
              negative_prompt: string
              guidance_scale: double
              selected_index: int64
              timestamp: string
              image_000: struct<bytes: binary, path: string>
              image_001: struct<bytes: binary, path: string>
              image_002: struct<bytes: binary, path: string>
              image_003: struct<bytes: binary, path: string>
              vs
              prompt: string
              negative_prompt: string
              guidance_scale: double
              selected_index: int64
              timestamp: string
              image_000: struct<bytes: binary, path: string>
              image_001: struct<bytes: binary, path: string>
              image_002: struct<bytes: binary, path: string>
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 101, in get_rows_content
                  pa_table = rows_index.query(offset=0, length=rows_max_number)
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 412, in query
                  return self.parquet_index.query(offset=offset, length=length)
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 270, in query
                  raise SchemaMismatchError("Parquet files have different schema.", err)
              libcommon.parquet_utils.SchemaMismatchError: ('Parquet files have different schema.', ArrowInvalid('Schema at index 1 was different: \nprompt: string\nnegative_prompt: string\nguidance_scale: double\nselected_index: int64\ntimestamp: string\nimage_000: struct<bytes: binary, path: string>\nimage_001: struct<bytes: binary, path: string>\nimage_002: struct<bytes: binary, path: string>\nimage_003: struct<bytes: binary, path: string>\nvs\nprompt: string\nnegative_prompt: string\nguidance_scale: double\nselected_index: int64\ntimestamp: string\nimage_000: struct<bytes: binary, path: string>\nimage_001: struct<bytes: binary, path: string>\nimage_002: struct<bytes: binary, path: string>'))
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 322, in compute
                  compute_first_rows_from_parquet_response(
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 113, in compute_first_rows_from_parquet_response
                  return create_first_rows_response(
                File "/src/libs/libcommon/src/libcommon/viewer_utils/rows.py", line 134, in create_first_rows_response
                  rows_content = get_rows_content(rows_max_number)
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 108, in get_rows_content
                  raise SplitParquetSchemaMismatchError(
              libcommon.exceptions.SplitParquetSchemaMismatchError: Split parquet files being processed have different schemas. Ensure all files have identical column names.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/utils.py", line 126, in get_rows_or_raise
                  return get_rows(
                File "/src/services/worker/src/worker/utils.py", line 64, in decorator
                  return func(*args, **kwargs)
                File "/src/services/worker/src/worker/utils.py", line 103, in get_rows
                  rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1388, in __iter__
                  for key, example in ex_iterable:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 282, in __iter__
                  for key, pa_table in self.generate_tables_fn(**self.kwargs):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 94, in _generate_tables
                  yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 74, in _cast_table
                  pa_table = table_cast(pa_table, self.info.features.arrow_schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2194, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              prompt: string
              negative_prompt: string
              guidance_scale: double
              selected_index: int64
              timestamp: string
              image_000: struct<bytes: binary, path: string>
                child 0, bytes: binary
                child 1, path: string
              image_001: struct<bytes: binary, path: string>
                child 0, bytes: binary
                child 1, path: string
              image_002: struct<bytes: binary, path: string>
                child 0, bytes: binary
                child 1, path: string
              -- schema metadata --
              huggingface: '{"info": {"features": {"prompt": {"_type": "Value", "dtype"' + 337
              to
              {'prompt': Value(dtype='string', id=None), 'negative_prompt': Value(dtype='string', id=None), 'guidance_scale': Value(dtype='float64', id=None), 'selected_index': Value(dtype='int64', id=None), 'timestamp': Value(dtype='string', id=None), 'image_000': Image(decode=True, id=None), 'image_001': Image(decode=True, id=None), 'image_002': Image(decode=True, id=None), 'image_003': Image(decode=True, id=None)}
              because column names don't match
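
The root cause is visible in the two schemas PyArrow compares above: one parquet shard of the split contains image_000 through image_003, while another contains only image_000 through image_002, so the shards cannot be concatenated or cast to the declared features. A minimal diagnostic sketch, assuming the split's parquet shards have been downloaded locally (the file names below are hypothetical), is:

    import pyarrow.parquet as pq

    # Hypothetical shard paths; substitute the actual parquet files of the split.
    files = ["train-00000-of-00002.parquet", "train-00001-of-00002.parquet"]
    schemas = {path: pq.read_schema(path) for path in files}

    # Collect every column name seen across shards, then report any shard that
    # is missing one of them (here, the shard without image_003).
    all_columns = set().union(*(set(schema.names) for schema in schemas.values()))
    for path, schema in schemas.items():
        missing = sorted(all_columns - set(schema.names))
        if missing:
            print(f"{path} is missing columns: {missing}")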

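One possible fix, sketched below under the assumption that rows in the shorter shard should simply carry nulls for image_003 (and that a shard with all four image columns can serve as the reference), is to append the missing column and rewrite the shard so every file matches the declared features. Regenerating the dataset so every file is written with the same set of columns avoids the mismatch entirely.

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Hypothetical paths: the reference shard already has image_000..image_003,
    # the broken shard is missing image_003.
    reference_schema = pq.read_schema("train-00000-of-00002.parquet")
    broken_path = "train-00001-of-00002.parquet"
    table = pq.read_table(broken_path)

    # Append any column the reference has but this shard lacks, filled with nulls
    # of the matching Arrow type (struct<bytes: binary, path: string> for images).
    for field in reference_schema:
        if field.name not in table.column_names:
            table = table.append_column(field, pa.nulls(len(table), type=field.type))

    # Match the reference column order, keep its embedded Hugging Face metadata so
    # the features (including image_003) stay consistent, then overwrite the shard.
    table = table.select(reference_schema.names)
    table = table.replace_schema_metadata(reference_schema.metadata)
    pq.write_table(table, broken_path)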