---
task_categories:
- robotics
language:
- en
tags:
- RDT
- rdt
- RDT 2
- manipulation
- bimanual
- ur5e
- webdataset
- vision-language-action
license: apache-2.0
---

## Dataset Summary
This dataset provides shards in the WebDataset format for fine-tuning RDT-2 or other policy models on bimanual manipulation. Each sample packs:
- a binocular RGB image (left and right wrist cameras concatenated horizontally)
- a relative action chunk (continuous control; 0.8 s at 30 Hz, i.e., a 24-step chunk)
- a discrete action token sequence (e.g., from a Residual VQ action tokenizer)
- a metadata JSON with an instruction key `sub_task_instruction_key` that indexes the corresponding instruction in `instructions.json`

Data were collected on a bimanual UR5e setup.
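Because the binocular image is simply the two wrist views concatenated along the width, recovering each 384x384 view is a column slice. A minimal sketch (the file name is illustrative):

```python
import numpy as np
from PIL import Image

# Load one decoded sample image (384 x 768 x 3, uint8).
img = np.asarray(Image.open("0.image.jpg"))
assert img.shape == (384, 768, 3)

# Left wrist camera is the left half; right wrist camera is the right half.
left_view, right_view = img[:, :384], img[:, 384:]
```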
## Supported Tasks
- Instruction-conditioned bimanual manipulation, including:
  - Pouring water: different water bottles and cups
  - Cleaning the desktop: different dustpans and paper balls
  - Folding towels: towels of different sizes and colors
  - Stacking cups: cups of different sizes and colors
## Data Structure

### Shard layout

Shards are named `shard-*.tar`. Inside each shard:
```
shard-000000.tar
├── 0.image.jpg         # binocular RGB, H=384, W=768, C=3, uint8
├── 0.action.npy        # relative actions, shape (24, 20), float32
├── 0.action_token.npy  # action tokens, shape (27,), int16 ∈ [0, 1024)
├── 0.meta.json         # metadata; includes "sub_task_instruction_key"
├── 1.image.jpg
├── 1.action.npy
├── 1.action_token.npy
├── 1.meta.json
└── ...
shard-000001.tar
shard-000002.tar
...
```
- **Image**: binocular wrist cameras concatenated horizontally; `np.ndarray` of shape `(384, 768, 3)` with `dtype=uint8` (stored as JPEG).
- **Action (continuous)**: `np.ndarray` of shape `(24, 20)`, `dtype=float32` (24-step chunk, 20-D control).
- **Action tokens (discrete)**: `np.ndarray` of shape `(27,)`, `dtype=int16`, values in `[0, 1024)`.
- **Metadata**: `meta.json` contains at least `sub_task_instruction_key`, pointing to an entry in the top-level `instructions.json`.
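To sanity-check a shard without any WebDataset tooling, you can read one sample's members straight from the tar and verify the documented shapes and dtypes. A minimal sketch (the shard path is illustrative):

```python
import io
import json
import tarfile

import numpy as np
from PIL import Image

with tarfile.open("shards/shard-000000.tar") as tar:
    def read(name):
        # Read the raw bytes of one member of the tar archive.
        return tar.extractfile(name).read()

    image = np.asarray(Image.open(io.BytesIO(read("0.image.jpg"))))
    action = np.load(io.BytesIO(read("0.action.npy")))
    tokens = np.load(io.BytesIO(read("0.action_token.npy")))
    meta = json.loads(read("0.meta.json"))

assert image.shape == (384, 768, 3) and image.dtype == np.uint8
assert action.shape == (24, 20) and action.dtype == np.float32
assert tokens.shape == (27,) and tokens.dtype == np.int16
assert "sub_task_instruction_key" in meta
```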
## Example Data Instance
```json
{
  "image": "0.image.jpg",
  "action": "0.action.npy",
  "action_token": "0.action_token.npy",
  "meta": {
    "sub_task_instruction_key": "fold_cloth_step_3"
  }
}
```
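The `sub_task_instruction_key` resolves against the top-level `instructions.json`. A minimal sketch, assuming that file maps keys to instruction strings:

```python
import json

with open("instructions.json") as fp:
    instructions = json.load(fp)

# e.g., meta["sub_task_instruction_key"] == "fold_cloth_step_3"
instruction = instructions["fold_cloth_step_3"]
```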
## How to Use
### 1) Official guidelines for fine-tuning the RDT-2 series

Use the example scripts and guidelines in the RDT2 repository: https://github.com/thu-ml/RDT2
### 2) Minimal loading example
```python
import glob
import json
import os
import random

import webdataset as wds


def no_split(src):
    # Pass shards through unchanged (used when there are too few shards to split).
    yield from src


def get_train_dataset(shards_dir):
    shards = sorted(glob.glob(os.path.join(shards_dir, "shard-*.tar")))
    assert shards, f"No shards under {shards_dir}"
    random.shuffle(shards)

    # Split shards across DataLoader workers only if there are enough shards.
    num_workers = wds.utils.pytorch_worker_info()[-1]
    workersplitter = wds.split_by_worker if len(shards) > num_workers else no_split

    dataset = (
        wds.WebDataset(
            shards,
            shardshuffle=False,
            nodesplitter=no_split,
            workersplitter=workersplitter,
            resampled=True,
        )
        .repeat()
        .shuffle(8192, initial=8192)
        .decode("pil")
        .map(
            lambda sample: {
                "image": sample["image.jpg"],
                "action_token": sample["action_token.npy"],
                "meta": sample["meta.json"],
            }
        )
        .with_epoch(nsamples=(2048 * 30 * 60 * 60))  # one epoch = 2048 hours of data at 30 Hz
    )
    return dataset


with open(os.path.join("<Dataset Directory>", "instructions.json")) as fp:
    instructions = json.load(fp)

dataset = get_train_dataset(os.path.join("<Dataset Directory>", "shards"))
```
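A sketch of consuming the dataset and resolving instructions during training, assuming a standard PyTorch `DataLoader` (batching and collation details depend on your training stack):

```python
from torch.utils.data import DataLoader

# batch_size=None yields individual samples; WebDataset is an IterableDataset.
loader = DataLoader(dataset, batch_size=None, num_workers=4)

for sample in loader:
    instruction = instructions[sample["meta"]["sub_task_instruction_key"]]
    image = sample["image"]                 # PIL.Image, 384 x 768
    action_tokens = sample["action_token"]  # np.ndarray, shape (27,)
    # ... feed (image, instruction, action_tokens) to your model ...
    break
```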
## Ethical Considerations
- Contains robot teleoperation/automation data. No PII is present by design.
- Ensure safe deployment/testing on real robots; follow lab safety and manufacturer guidelines.
## Citation

If you use this dataset, please cite the dataset and the associated project. For example:
```bibtex
@software{rdt2,
  title={RDT2: Enabling Zero-Shot Cross-Embodiment Generalization by Scaling Up UMI Data},
  author={RDT Team},
  url={https://github.com/thu-ml/RDT2},
  month={September},
  year={2025}
}
```
## License
- Dataset license: Apache-2.0 (unless otherwise noted by the maintainers of your fork/release).
- Ensure compliance when redistributing derived data or models.
## Maintainers & Contributions
We welcome fixes and improvements to the conversion scripts and docs (see https://github.com/thu-ml/RDT2/tree/main#troubleshooting). Please open issues/PRs with:
- OS + Python versions
- Minimal repro code
- Error tracebacks
- Any other helpful context