License: apache-2.0
Dataset Card for ByteDance Robot Benchmark with 20 Tasks (BDRBench-20)
Dataset Description
- Homepage: RoboVLMs, GR-2
- Repository: RoboVLMs
- Contact: kongtao@bytedance.com
Dataset Summary
ByteDance Robot Benchmark (BDRBench-20) is a vision-language-action (VLA) dataset containing 8K high-quality trajectories. It covers 20 common manipulation tasks, such as pick-and-place, pouring, and opening/closing, and is designed for training and evaluating VLA models in real-world scenarios.
Dataset Structure
The dataset is split into train and val sets and organized into two top-level directories: anns (annotations) and media (videos). The anns directory contains an annotation file for each subtask, while the media directory contains the rollout videos for each task.
For example, to collect a trajectory for the task "pick up the cucumber from the cutting board; place the picked object in the vegetable basket", the robot is teleoperated to perform the pick and place subtasks consecutively to improve efficiency. The rollout of both subtasks is recorded in the same video, but their annotations are stored in separate files.
The detailed file structure is listed as follows:
Dataset
├── anns                  # text, video path, actions
│   ├── train
│   │   ├── {id}.json
│   │   └── ...
│   └── val
│       ├── {id}.json
│       └── ...
└── media                 # videos
    ├── train
    │   ├── {id}
    │   │   ├── rgb.mp4
    │   │   └── hand_rgb.mp4
    │   └── ...
    └── val
        └── ...
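To get oriented, the sketch below pairs each annotation file with the videos it refers to. It is only a minimal example, not part of the official tooling: the DATASET_ROOT path is hypothetical, the annotation keys ("texts", "videos", "video_path") are the ones documented in the next section, and it assumes that video_path values such as "/media/train/{id}/rgb.mp4" are resolved relative to the dataset root despite the leading slash.

import json
from pathlib import Path

# Hypothetical location of the extracted dataset; adjust to your setup.
DATASET_ROOT = Path("/path/to/BDRBench-20")

for ann_file in sorted((DATASET_ROOT / "anns" / "train").glob("*.json")):
    with open(ann_file) as f:
        ann = json.load(f)
    task_text = ann["texts"][0]
    # Assumption: video_path is relative to the dataset root even though it
    # starts with a slash (e.g. "/media/train/{id}/rgb.mp4").
    video_paths = [DATASET_ROOT / v["video_path"].lstrip("/") for v in ann["videos"]]
    print(ann_file.name, "|", task_text, "|", [str(p) for p in video_paths])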
Annotation Structure
Here, we provide a detailed explanation of the meaning of each key in the annotation JSON files (in ./anns).
"texts": This is a list containing a single string that describes the task in English.
Example:["open the drawer"]
"videos": This is a list containing two dictionaries. The first dictionary corresponds to the video recorded by the static camera, and the second corresponds to the wrist camera. For each dictionary, the following keys are used:
video_path
: The path to the video file.start
: The starting frame of the task in the video.end
: The ending frame of the task in the video.- The first dictionary also contains an additional key,
crop
, which specifies the cropping area for the video. It is recommended to use this key to crop the video during training in order to reduce the impact of irrelevant backgrounds.
Example:
[ { "video_path": "/media/val/0_5/rgb.mp4", "crop": [[45,200], [705,1000]], "start": 0, "end": 124 }, { "video_path": "/media/val/0_5/hand_rgb.mp4", "start": 0, "end": 124 } ]
"action": This is a list recording the action at every timestep, expressed in 7 dimensions: 3 for translation (x, y, z), 3 for Euler angles (rotation), and 1 for the gripper (open/close). Note that the action represents the changes in the relative state. Therefore, when using these data, you should also use relative states. That is, the state at timestep st+1 is expressed in the coordinate system of the end effector at timestep st.
"state": Similar to "action", the state is described in 7 dimensions (3 for translation, 3 for Euler angles, and 1 for gripper open/close), but it is expressed in a global coordinate system. Since the data are collected from different machines with varying global coordinates, it is recommended to use relative states if you want to train your model and deploy it in a different environment using the state data.
Example code for calculating relative states:
import numpy as np
import torch

# Example of how to get relative states.
# 'label' is an annotation loaded from ./anns, and 'frame_ids' indicates the
# indexes of the states you want to use. 'euler2rotm' and 'rotm2euler' are
# helper functions (not shown) that convert Euler angles to a rotation matrix
# and back.
def _get_relative_states(self, label, frame_ids):
    states = label['state']
    # The first selected frame defines the reference coordinate system.
    first_id = frame_ids[0]
    first_xyz = np.array(states[first_id][0:3])
    first_rpy = np.array(states[first_id][3:6])
    first_rotm = euler2rotm(first_rpy)
    first_gripper = states[first_id][6]
    first_state = np.zeros(7, dtype=np.float32)
    first_state[-1] = first_gripper
    rel_states = [first_state]
    for k in range(1, len(frame_ids)):
        curr_frame_id = frame_ids[k]
        curr_xyz = np.array(states[curr_frame_id][0:3])
        curr_rpy = np.array(states[curr_frame_id][3:6])
        curr_rotm = euler2rotm(curr_rpy)
        # Express the current pose in the reference end-effector frame.
        curr_rel_rotm = first_rotm.T @ curr_rotm
        curr_rel_rpy = rotm2euler(curr_rel_rotm)
        curr_rel_xyz = np.dot(first_rotm.T, curr_xyz - first_xyz)
        curr_gripper = states[curr_frame_id][6]
        curr_state = np.zeros(7, dtype=np.float32)
        curr_state[0:3] = curr_rel_xyz
        curr_state[3:6] = curr_rel_rpy
        curr_state[-1] = curr_gripper
        rel_states.append(curr_state)
    return torch.from_numpy(np.array(rel_states))
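For the inverse direction at deployment time, i.e. mapping a predicted relative state (or action) back into the global frame of its reference timestep, the sketch below simply inverts the math above. It is an assumption-laden example rather than official code: it uses SciPy for the Euler/rotation-matrix conversions and assumes an "xyz" Euler order, which must match whatever euler2rotm/rotm2euler convention you use.

import numpy as np
from scipy.spatial.transform import Rotation as R

def relative_to_absolute(ref_state, rel_state, euler_order="xyz"):
    # ref_state: 7-D absolute state (x, y, z, roll, pitch, yaw, gripper) of the
    # reference timestep; rel_state: 7-D state expressed in that end-effector
    # frame. The "xyz" Euler order is an assumption.
    ref_xyz = np.asarray(ref_state[0:3], dtype=np.float64)
    ref_rpy = np.asarray(ref_state[3:6], dtype=np.float64)
    rel_xyz = np.asarray(rel_state[0:3], dtype=np.float64)
    rel_rpy = np.asarray(rel_state[3:6], dtype=np.float64)
    ref_rotm = R.from_euler(euler_order, ref_rpy).as_matrix()
    rel_rotm = R.from_euler(euler_order, rel_rpy).as_matrix()
    abs_xyz = ref_xyz + ref_rotm @ rel_xyz      # undo rel_xyz = R0.T @ (p - p0)
    abs_rotm = ref_rotm @ rel_rotm              # undo rel_rotm = R0.T @ R
    abs_rpy = R.from_matrix(abs_rotm).as_euler(euler_order)
    return np.concatenate([abs_xyz, abs_rpy, [rel_state[6]]]).astype(np.float32)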
Media Structure
The media directory stores the videos recorded by the static camera (rgb.mp4) and the wrist camera (hand_rgb.mp4). These videos are aligned frame by frame.
Data Splits
The data fields are consistent across the train and val splits. The split sizes are as follows:

| Name  | Episodes | Samples   |
|-------|----------|-----------|
| train | 7,440    | 1,170,490 |
| val   | 638      | 97,985    |
Additionally, here is the number of trajectories for each task:
For the train split:
{
"pick up the cucumber from the cutting board; place the picked object in the vegetable basket": 498,
"pick up the eggplant from the red plate; place the picked object on the table": 342,
"pick up the mandarin from the green plate; place the picked object on the table": 297,
"pick up the red mug from the rack; place the picked object on the table": 497,
"pick up the knife from the left of the white plate; place the picked object into the drawer": 261,
"pick up the black seasoning powder from the table; pour the black seasoning powder in the red bowl; place the picked object on the table": 385,
"pick up the eggplant from the green plate; place the picked object on the table": 248,
"pick up the potato from the vegetable basket; place the picked object on the cutting board": 496,
"pick up the green mug from the rack; place the picked object on the table": 496,
"pick up the potato from the cutting board; place the picked object in the vegetable basket": 500,
"pick up the mandarin from the green plate; place the picked object on the red plate": 66,
"pick up the cucumber from the vegetable basket; place the picked object on the cutting board": 498,
"pick up the knife from the right of the white plate; place the picked object into the drawer": 246,
"pick up the green bottle from the white box; place the picked object on the tray": 500,
"pick up the eggplant from the green plate; place the picked object on the red plate": 60,
"pick up the eggplant from the red plate; place the picked object on the green plate": 53,
"press the toaster switch": 499,
"open the oven": 500,
"close the oven": 498,
"open the drawer": 500
}
For the val split:
{
"pick up the green bottle from the white box;place the picked object on the tray": 94,
"pick up the red mug from the rack;place the picked object on the table": 30,
"pick up the mandarin from the green plate;place the picked object on the table": 28,
"pick up the black seasoning powder from the table;pour the black seasoning powder in the red bowl;place the picked object on the table": 31,
"pick up the cucumber from the cutting board;place the picked object in the vegetable basket": 41,
"pick up the cucumber from the vegetable basket;place the picked object on the cutting board": 38,
"pick up the potato from the cutting board;place the picked object in the vegetable basket": 41,
"pick up the eggplant from the green plate;place the picked object on the red plate": 5,
"pick up the eggplant from the red plate;place the picked object on the table": 26,
"pick up the potato from the vegetable basket;place the picked object on the cutting board": 40,
"pick up the green mug from the rack;place the picked object on the table": 29,
"pick up the knife from the left of the white plate;place the picked object into the drawer": 10,
"pick up the eggplant from the green plate;place the picked object on the table": 20,
"pick up the knife from the right of the white plate;place the picked object into the drawer": 11,
"pick up the eggplant from the red plate;place the picked object on the green plate": 2,
"pick up the mandarin from the green plate;place the picked object on the red plate": 4,
"open the drawer": 60,
"press the toaster switch": 16,
"close the oven": 55,
"open the oven": 57
}
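The tallies above can be approximated directly from the annotation files. The sketch below is only illustrative: the DATASET_ROOT path is hypothetical, and because pick-and-place episodes are annotated per subtask (see Dataset Structure), counts keyed on the "texts" field may not line up one-to-one with the per-task numbers listed here.

import json
from collections import Counter
from pathlib import Path

# Hypothetical location of the extracted dataset; adjust to your setup.
DATASET_ROOT = Path("/path/to/BDRBench-20")

for split in ("train", "val"):
    ann_files = sorted((DATASET_ROOT / "anns" / split).glob("*.json"))
    task_counts = Counter()
    for ann_file in ann_files:
        with open(ann_file) as f:
            ann = json.load(f)
        task_counts[ann["texts"][0]] += 1  # tally by task/subtask description
    print(split, "annotation files:", len(ann_files))
    for task, n in task_counts.most_common():
        print(f"  {n:5d}  {task}")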
Personal and Sensitive Information
We did not find any personal or sensitive information in this benchmark.
Additional Information
Licensing Information
BDRBench-20 is licensed under the Apache License 2.0.
Citation Information
@article{li2023generalist,
  title={Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models},
  author={Li, Xinghang and Li, Peiyan and Liu, Minghuan and Wang, Dong and Liu, Jirong and Kang, Bingyi and Ma, Xiao and Kong, Tao and Zhang, Hanbo and Liu, Huaping},
  journal={arXiv preprint arXiv:2412.14058},
  year={2024}
}

@article{cheang2024gr2generativevideolanguageactionmodel,
  title={GR-2: A Generative Video-Language-Action Model with Web-Scale Knowledge for Robot Manipulation},
  author={Cheang, Chi-Lam and Chen, Guangzeng and Jing, Ya and Kong, Tao and Li, Hang and Li, Yifeng and Liu, Yuxiao and Wu, Hongtao and Xu, Jiafeng and Yang, Yichu and Zhang, Hanbo and Zhu, Minzhao},
  journal={arXiv preprint arXiv:2410.06158},
  year={2024}
}
Contributions
This dataset is a joint effort of the members of the robotics research team at ByteDance Research.