Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 2 new columns ({'lr', 'clip_version'}) and 5 missing columns ({'channel_size', 'use_dropout', 'network', 'optim', 'dropout_prob'}).

This happened while the json dataset builder was generating data using

hf://datasets/QuangNguyen22/PromptGD/prompt_gd/240429_1300_/commandline_args.json (at revision 7816e2761ab5a9241b1e138a3b496010c8048c80)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              clip_version: string
              input_size: int64
              use_depth: int64
              use_rgb: int64
              iou_threshold: double
              dataset: string
              dataset_path: string
              split: double
              ds_shuffle: bool
              ds_rotate: double
              num_workers: int64
              batch_size: int64
              epochs: int64
              batches_per_epoch: int64
              lr: double
              lr_step_size: int64
              description: string
              logdir: string
              vis: bool
              force_cpu: bool
              random_seed: int64
              seen: int64
              to
              {'network': Value(dtype='string', id=None), 'input_size': Value(dtype='int64', id=None), 'use_depth': Value(dtype='int64', id=None), 'use_rgb': Value(dtype='int64', id=None), 'use_dropout': Value(dtype='int64', id=None), 'dropout_prob': Value(dtype='float64', id=None), 'channel_size': Value(dtype='int64', id=None), 'iou_threshold': Value(dtype='float64', id=None), 'dataset': Value(dtype='string', id=None), 'dataset_path': Value(dtype='string', id=None), 'split': Value(dtype='float64', id=None), 'ds_shuffle': Value(dtype='bool', id=None), 'ds_rotate': Value(dtype='float64', id=None), 'num_workers': Value(dtype='int64', id=None), 'batch_size': Value(dtype='int64', id=None), 'epochs': Value(dtype='int64', id=None), 'batches_per_epoch': Value(dtype='int64', id=None), 'optim': Value(dtype='string', id=None), 'lr_step_size': Value(dtype='int64', id=None), 'description': Value(dtype='string', id=None), 'logdir': Value(dtype='string', id=None), 'vis': Value(dtype='bool', id=None), 'force_cpu': Value(dtype='bool', id=None), 'random_seed': Value(dtype='int64', id=None), 'seen': Value(dtype='int64', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 2 new columns ({'lr', 'clip_version'}) and 5 missing columns ({'channel_size', 'use_dropout', 'network', 'optim', 'dropout_prob'}).
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/QuangNguyen22/PromptGD/prompt_gd/240429_1300_/commandline_args.json (at revision 7816e2761ab5a9241b1e138a3b496010c8048c80)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
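As the message suggests, one workaround that needs no changes to the repository is to keep the two column layouts apart at load time. The sketch below is a hedged example, not something the dataset documents: it passes only the file named in the error (pinned to the revision above) to one `json`-builder call; the grconvnet3-style files would be loaded in a second, separate call.

```python
# Hedged sketch of the "separate configurations" workaround applied at load
# time: files that share a schema go into one load_dataset("json", ...) call,
# so the builder never casts between the grconvnet3-style columns
# (network, optim, use_dropout, ...) and the CLIP-style columns (clip_version, lr).
from datasets import load_dataset

BASE = (
    "https://huggingface.co/datasets/QuangNguyen22/PromptGD/resolve/"
    "7816e2761ab5a9241b1e138a3b496010c8048c80"
)

# The run named in the cast error; further runs that use the same
# clip_version/lr schema could be appended to this list.
clip_runs = load_dataset(
    "json",
    data_files=[f"{BASE}/prompt_gd/240429_1300_/commandline_args.json"],
)

print(clip_runs["train"].column_names)

# A second load_dataset("json", data_files=[...]) call, listing only the
# grconvnet3-style files, would cover the other column layout.
```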


Preview of the rows (transposed: each dataset column is shown as a table row; Row 1–Row 5 are the previewed records):

| Column | Type | Row 1 | Row 2 | Row 3 | Row 4 | Row 5 |
|---|---|---|---|---|---|---|
| network | string | grconvnet3 | grconvnet3 | null | null | null |
| input_size | int64 | 224 | 224 | 224 | 224 | 224 |
| use_depth | int64 | 0 | 0 | 0 | 0 | 0 |
| use_rgb | int64 | 1 | 1 | 1 | 1 | 1 |
| use_dropout | int64 | 1 | 1 | null | null | null |
| dropout_prob | float64 | 0.1 | 0.1 | null | null | null |
| channel_size | int64 | 32 | 32 | null | null | null |
| iou_threshold | float64 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 |
| dataset | string | grasp-anything | grasp-anything | grasp-anything | grasp-anything | grasp-anything |
| dataset_path | string | /home/bdi/AdvancedLiterateMachinery/DocumentUnderstanding/CLIP_OCR/Dataset/grasp-anything++/seen | same as Row 1 | same as Row 1 | same as Row 1 | same as Row 1 |
| split | float64 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 |
| ds_shuffle | bool | false | false | false | false | false |
| ds_rotate | float64 | 0 | 0 | 0 | 0 | 0 |
| num_workers | int64 | 8 | 8 | 8 | 8 | 8 |
| batch_size | int64 | 8 | 4 | 4 | 4 | 8 |
| epochs | int64 | 50 | 100 | 150 | 100 | 100 |
| batches_per_epoch | int64 | 1000 | 600 | 600 | 600 | 300 |
| optim | string | adam | adam | null | null | null |
| lr_step_size | int64 | 10 | 10 | 5 | 10 | 5 |
| description | string |  |  |  |  |  |
| logdir | string | logs/ | logs/ | logs/ld_grasp | logs/prompt_gd | logs/prompt_gd |
| vis | bool | false | false | false | false | false |
| force_cpu | bool | false | false | false | false | false |
| random_seed | int64 | 123 | 123 | 123 | 123 | 123 |
| seen | int64 | 1 | 1 | 1 | 1 | 1 |
| clip_version | string | null | null | ViT-B/32 | ViT-B/32 | ViT-B/32 |
| lr | float64 | null | null | 0.003 | 0.003 | 0.003 |
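Each preview row above comes from one run's commandline_args.json, so a single file can also be read outside the datasets library. The following is a minimal sketch under that assumption; wrapping the parsed dict in argparse.Namespace is an illustrative choice, not something the repository prescribes.

```python
# Hedged sketch: download the one commandline_args.json named in the cast
# error (pinned to the same revision) and expose its keys as attributes so
# the run's hyperparameters can be inspected like parsed CLI arguments.
import json
from argparse import Namespace

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="QuangNguyen22/PromptGD",
    filename="prompt_gd/240429_1300_/commandline_args.json",
    repo_type="dataset",
    revision="7816e2761ab5a9241b1e138a3b496010c8048c80",
)

with open(path) as fh:
    args = Namespace(**json.load(fh))

print(args.dataset, args.batch_size, args.epochs, args.clip_version)
```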