# EgoSPT
EgoSPT is an egocentric manipulation trajectory dataset collected for vision-conditioned trajectory prediction. Each episode contains an RGB video, time-aligned end-effector poses, gripper widths, and valid-frame masks.
## Dataset Summary

- Scenes: `scene1`, `scene2`, `scene3`
- Tasks: 112 task folders
- Episodes: 11,515 processed episode folders
- Size: about 37 GB
- Main modality: egocentric RGB video
- Action target: future end-effector trajectory with gripper width
## Directory Structure

```
EgoSPT/
├── scene1/
├── scene2/
└── scene3/
    └── <task_name>/
        └── recording_output_processed/
            └── episode_<id>/
                ├── camera_1.mp4
                ├── pose_interp
                ├── gripper_widths
                └── valid_indices
```
Each task name follows the pattern `put_fork<id>_to_<target><id>`, where targets include bowls, cups, and plates.
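Task names with this pattern can be split into their parts with a short regex. A sketch, assuming decimal ids and lowercase target words (the regex is inferred from the pattern above, not taken from the dataset's own tooling):

```python
import re

# Matches e.g. "put_fork3_to_bowl12" -> fork id 3, target "bowl", target id 12.
TASK_NAME = re.compile(r"^put_fork(\d+)_to_([a-z]+)(\d+)$")

def parse_task_name(name: str):
    """Return the parsed parts of a task folder name, or None if it does not match."""
    m = TASK_NAME.match(name)
    if m is None:
        return None
    fork_id, target, target_id = m.groups()
    return {"fork_id": int(fork_id), "target": target, "target_id": int(target_id)}
```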
## Episode Contents

Each processed episode contains:

| File | Description |
|---|---|
| `camera_1.mp4` | egocentric RGB video |
| `pose_interp` | time-aligned end-effector pose sequence, stored as a zarr array |
| `gripper_widths` | gripper width sequence, stored as a zarr array |
| `valid_indices` | boolean valid-frame mask, stored as a zarr array |
The pose trajectory is represented as homogeneous SE(3) transforms (4×4 matrices). Downstream code converts these poses into relative actions:

```
[dx, dy, dz, rot6d_0..5, gripper_width]
```
## Usage With `umi_day.vision_traj`

Place the dataset under:

```
umi_day/EgoSPT
```

Then train with:

```bash
python -m umi_day.vision_traj.train \
    data.root=umi_day/EgoSPT \
    data.annotations_json=annotations_merged.json
```
The vision_traj loader expects an annotation JSON that provides object and
target bounding boxes on the first frame of each episode.
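A rough sketch of what such an annotation file might contain. The exact schema is defined by the loader in `umi_day/vision_traj/dataset.py`, so the key names and coordinate convention below are illustrative assumptions only:

```json
{
  "scene1/put_fork1_to_bowl2/recording_output_processed/episode_0001": {
    "object_bbox": [412, 208, 530, 311],
    "target_bbox": [120, 340, 260, 470]
  }
}
```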
## Notes

- This dataset is intended for robotics research on egocentric perception, object-conditioned manipulation, and trajectory prediction.
- The processed episode folders are directly readable by `umi_day/vision_traj/dataset.py`.
- See `umi_day/vision_traj/README.md` for model training and evaluation commands.