---
license: apache-2.0
---
# Dataset Card for ByteDance Robot Benchmark with 20 Tasks (BDRBench-20)

## Table of Contents
- [Dataset Card for ByteDance Robot Benchmark with 20 Tasks (BDRBench-20)](#dataset-card-for-bytedance-robot-benchmark-with-20-tasks-bdrbench-20)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Dataset Structure](#dataset-structure)
      - [Annotation Structure](#annotation-structure)
      - [Media Structure](#media-structure)
      - [Data Splits](#data-splits)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Additional Information](#additional-information)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [RoboVLMs](https://robovlms.github.io/), [GR-2](https://gr2-manipulation.github.io/)
- **Repository:** [RoboVLMs](https://github.com/Robot-VLAs/RoboVLMs)
- **Contact:** kongtao@bytedance.com

### Dataset Summary

ByteDance Robot Benchmark with 20 Tasks (BDRBench-20) is a vision-language-action (VLA) dataset containing 8K high-quality trajectories. It covers 20 common manipulation tasks, such as pick-and-place, pouring, and open/close actions, and is intended for training and evaluating VLA models in real-world scenarios.

### Dataset Structure

The dataset contains two top-level directories, `anns` (annotations) and `media` (videos), each of which is split into `train` and `val`. The `anns` directory holds one annotation file per subtask, while the `media` directory holds the rollout videos of each task.

For example, to collect a trajectory for the task "*pick up the cucumber from the cutting board; place the picked object in the vegetable basket*", the robot is teleoperated to perform the pick and place subtasks consecutively, which improves collection efficiency. Both subtasks are recorded in the same video, but their annotations are stored in separate files.

The detailed file structure is listed as follows:
```bash
Dataset
β”œβ”€β”€ anns # text, video path, actions
β”‚   β”œβ”€β”€ train
β”‚   β”‚   β”œβ”€β”€ {id}.json
β”‚   β”‚   β”œβ”€β”€ ...
β”‚   β”œβ”€β”€ val
β”‚   β”‚   β”œβ”€β”€ {id}.json
β”‚   β”‚   β”œβ”€β”€ ...
β”œβ”€β”€ media # videos
β”‚   β”œβ”€β”€ train
β”‚   β”‚   β”œβ”€β”€ {id}
β”‚   β”‚   β”‚   β”œβ”€β”€ rgb.mp4      # static camera
β”‚   β”‚   β”‚   β”œβ”€β”€ hand_rgb.mp4 # wrist camera
β”‚   β”‚   β”œβ”€β”€ ...
β”‚   β”œβ”€β”€ val
β”‚   β”‚   β”œβ”€β”€ ...
```
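
For example, the annotations of a split can be paired with their media directories as follows (a minimal sketch, assuming the `{id}` of each annotation file matches its media directory name, as the layout above suggests; the dataset root path is a placeholder):

```python
import json
from pathlib import Path

root = Path("/path/to/BDRBench-20")  # placeholder dataset root

for ann_path in sorted((root / "anns" / "train").glob("*.json")):
    traj_id = ann_path.stem                      # e.g. "0_5"
    media_dir = root / "media" / "train" / traj_id
    with open(ann_path) as f:
        ann = json.load(f)                       # keys are described below
    print(traj_id, media_dir / "rgb.mp4", media_dir / "hand_rgb.mp4")
```
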
#### Annotation Structure

Here, we provide a detailed explanation of the meaning of each key in the annotation JSON file (in `./anns`).

1) **"texts"**: This is a list containing a single string that describes the task in English.  
   Example: `["open the drawer"]`

2) **"videos"**: This is a list containing two dictionaries. The first dictionary corresponds to the video recorded by the static camera, and the second corresponds to the wrist camera. For each dictionary, the following keys are used:
   - `video_path`: The path to the video file.
   - `start`: The starting frame of the task in the video.
   - `end`: The ending frame of the task in the video.
   - The first dictionary also contains an additional key, `crop`, which specifies the cropping area for the video. It is recommended to use this key to crop the video during training in order to reduce the impact of irrelevant backgrounds.

   Example:
    ```python
    [
      {
          "video_path": "/media/val/0_5/rgb.mp4",
          "crop": [[45,200], [705,1000]],
          "start": 0,
          "end": 124
      },
      {
          "video_path": "/media/val/0_5/hand_rgb.mp4",
          "start": 0,
          "end": 124
      }
    ]
    ```

3) **"action"**: This is a list recording the action at every timestep, expressed in 7 dimensions: 3 for translation (x, y, z), 3 for Euler angles (rotation), and 1 for the gripper (open/close). Note that the action represents the changes in the relative state. Therefore, when using these data, you should also use relative states. That is, the state at timestep s<sub>t+1</sub> is expressed in the coordinate system of the end effector at timestep s<sub>t</sub>.

4) **"state"**: Similar to "action", the state is described in 7 dimensions (3 for translation, 3 for Euler angles, and 1 for gripper open/close), but it is expressed in a global coordinate system. Since the data are collected from different machines with varying global coordinates, it is recommended to use relative states if you want to train your model and deploy it in a different environment using the state data.

   Example code for calculating relative states:
    ```python
    # Example of how to convert global states into relative states.
    import numpy as np
    import torch

    # euler2rotm / rotm2euler are assumed helpers that convert Euler angles
    # to / from 3x3 rotation matrices; define them to match the dataset's
    # Euler-angle convention (e.g. via scipy.spatial.transform.Rotation).

    def _get_relative_states(label, frame_ids):
        # 'label' is a loaded annotation dict; 'frame_ids' lists the indexes
        # of the states you want to use.
        states = label['state']
        first_id = frame_ids[0]
        first_xyz = np.array(states[first_id][0:3])
        first_rpy = np.array(states[first_id][3:6])
        first_rotm = euler2rotm(first_rpy)
        first_gripper = states[first_id][6]
        # The first frame is the reference, so its relative pose is zero.
        first_state = np.zeros(7, dtype=np.float32)
        first_state[-1] = first_gripper
        rel_states = [first_state]
        for k in range(1, len(frame_ids)):
            curr_frame_id = frame_ids[k]
            curr_xyz = np.array(states[curr_frame_id][0:3])
            curr_rpy = np.array(states[curr_frame_id][3:6])
            curr_rotm = euler2rotm(curr_rpy)
            # Express the current pose in the frame of the first state.
            curr_rel_rotm = first_rotm.T @ curr_rotm
            curr_rel_rpy = rotm2euler(curr_rel_rotm)
            curr_rel_xyz = np.dot(first_rotm.T, curr_xyz - first_xyz)
            curr_gripper = states[curr_frame_id][6]
            curr_state = np.zeros(7, dtype=np.float32)
            curr_state[0:3] = curr_rel_xyz
            curr_state[3:6] = curr_rel_rpy
            curr_state[-1] = curr_gripper
            rel_states.append(curr_state)
        return torch.from_numpy(np.array(rel_states))
    ```
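
When deploying a policy that predicts these relative actions, each 7-D delta has to be composed back onto the current global end-effector pose. The sketch below is the inverse of the relative-state conversion above; the scipy-based `euler2rotm`/`rotm2euler` helpers and the `"xyz"` Euler convention are assumptions that you should match to your own setup.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Assumed helpers; make sure the Euler convention matches the dataset's.
def euler2rotm(rpy):
    return R.from_euler("xyz", rpy).as_matrix()

def rotm2euler(rotm):
    return R.from_matrix(rotm).as_euler("xyz")

def apply_relative_action(curr_xyz, curr_rpy, action):
    """Compose one 7-D relative action onto a global end-effector pose.

    Inverse of the relative-state computation above:
    xyz_next = xyz_t + R_t @ delta_xyz,  R_next = R_t @ delta_R.
    """
    delta_xyz, delta_rpy, gripper = action[0:3], action[3:6], action[6]
    curr_rotm = euler2rotm(curr_rpy)
    next_xyz = np.asarray(curr_xyz) + curr_rotm @ np.asarray(delta_xyz)
    next_rotm = curr_rotm @ euler2rotm(delta_rpy)
    return next_xyz, rotm2euler(next_rotm), gripper
```
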

#### Media Structure

The `media` directory is used to store videos recorded by the static camera (`rgb.mp4`) and wrist camera (`hand_rgb.mp4`). These videos have been aligned frame by frame.
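
As an illustration, the snippet below reads an aligned pair of frames with OpenCV and applies the recommended `crop` to the static view. The paths are placeholders, and the interpretation of `crop` as `[[y1, x1], [y2, x2]]` pixel coordinates (and of `end` as an exclusive bound) is an assumption; verify both against a few decoded frames before training.

```python
import json
import cv2

ROOT = "/path/to/BDRBench-20"  # placeholder dataset root

with open(f"{ROOT}/anns/val/0_5.json") as f:  # hypothetical annotation id
    ann = json.load(f)

static, wrist = ann["videos"]  # static camera first, wrist camera second
cap_rgb = cv2.VideoCapture(ROOT + static["video_path"])
cap_hand = cv2.VideoCapture(ROOT + wrist["video_path"])

# Assumption: crop = [[y1, x1], [y2, x2]] in pixels.
(y1, x1), (y2, x2) = static["crop"]

cap_rgb.set(cv2.CAP_PROP_POS_FRAMES, static["start"])
cap_hand.set(cv2.CAP_PROP_POS_FRAMES, wrist["start"])
for t in range(static["start"], static["end"]):
    ok_rgb, frame_rgb = cap_rgb.read()
    ok_hand, frame_hand = cap_hand.read()
    if not (ok_rgb and ok_hand):
        break
    frame_rgb = frame_rgb[y1:y2, x1:x2]  # crop only the static view
    # ... feed (frame_rgb, frame_hand) and the matching action to your model
```
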

#### Data Splits

The data fields are consistent across the `train` and `val` splits. Their sizes are listed below:

| Name   | Episodes | Samples   |
|--------|----------|-----------|
| train  | 7,440    | 1,170,490 |
| val    | 638      | 97,985    |

Additionally, here is the number of trajectories for each task:

```python
# Train split:
{
    "pick up the cucumber from the cutting board; place the picked object in the vegetable basket": 498,
    "pick up the eggplant from the red plate; place the picked object on the table": 342,
    "pick up the mandarin from the green plate; place the picked object on the table": 297,
    "pick up the red mug from the rack; place the picked object on the table": 497,
    "pick up the knife from the left of the white plate; place the picked object into the drawer": 261,
    "pick up the black seasoning powder from the table; pour the black seasoning powder in the red bowl; place the picked object on the table": 385,
    "pick up the eggplant from the green plate; place the picked object on the table": 248,
    "pick up the potato from the vegetable basket; place the picked object on the cutting board": 496,
    "pick up the green mug from the rack; place the picked object on the table": 496,
    "pick up the potato from the cutting board; place the picked object in the vegetable basket": 500,
    "pick up the mandarin from the green plate; place the picked object on the red plate": 66,
    "pick up the cucumber from the vegetable basket; place the picked object on the cutting board": 498,
    "pick up the knife from the right of the white plate; place the picked object into the drawer": 246,
    "pick up the green bottle from the white box; place the picked object on the tray": 500,
    "pick up the eggplant from the green plate; place the picked object on the red plate": 60,
    "pick up the eggplant from the red plate; place the picked object on the green plate": 53,
    "press the toaster switch": 499,
    "open the oven": 500,
    "close the oven": 498,
    "open the drawer": 500
}
```
```python
# Val split:
{
    "pick up the green bottle from the white box;place the picked object on the tray": 94,
    "pick up the red mug from the rack;place the picked object on the table": 30,
    "pick up the mandarin from the green plate;place the picked object on the table": 28,
    "pick up the black seasoning powder from the table;pour the black seasoning powder in the red bowl;place the picked object on the table": 31,
    "pick up the cucumber from the cutting board;place the picked object in the vegetable basket": 41,
    "pick up the cucumber from the vegetable basket;place the picked object on the cutting board": 38,
    "pick up the potato from the cutting board;place the picked object in the vegetable basket": 41,
    "pick up the eggplant from the green plate;place the picked object on the red plate": 5,
    "pick up the eggplant from the red plate;place the picked object on the table": 26,
    "pick up the potato from the vegetable basket;place the picked object on the cutting board": 40,
    "pick up the green mug from the rack;place the picked object on the table": 29,
    "pick up the knife from the left of the white plate;place the picked object into the drawer": 10,
    "pick up the eggplant from the green plate;place the picked object on the table": 20,
    "pick up the knife from the right of the white plate;place the picked object into the drawer": 11,
    "pick up the eggplant from the red plate;place the picked object on the green plate": 2,
    "pick up the mandarin from the green plate;place the picked object on the red plate": 4,
    "open the drawer": 60,
    "press the toaster switch": 16,
    "close the oven": 55,
    "open the oven": 57
}
```

### Personal and Sensitive Information

We did not find any personal or sensitive information in this benchmark.

## Additional Information

### Licensing Information

BDRBench-20 is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Citation Information

```
@article{li2023generalist,
    title={Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models},
    author={Li, Xinghang and Li, Peiyan and Liu, Minghuan and Wang, Dong and Liu, Jirong and Kang, Bingyi and Ma, Xiao and Kong, Tao and Zhang, Hanbo and Liu, Huaping},
    journal={arXiv preprint arXiv:2412.14058},
    year={2024}
}
```
```
@article{cheang2024gr2generativevideolanguageactionmodel,
    title={GR-2: A Generative Video-Language-Action Model with Web-Scale Knowledge for Robot Manipulation},
    author={Chi-Lam Cheang and Guangzeng Chen and Ya Jing and Tao Kong and Hang Li and Yifeng Li and Yuxiao Liu and Hongtao Wu and Jiafeng Xu and Yichu Yang and Hanbo Zhang and Minzhao Zhu},
    journal={arXiv preprint arXiv:2410.06158},
    year={2024}
}
```

### Contributions
This dataset is a joint effort by the members of the robotics research team at ByteDance Research.