
Samples of VR-Folding dataset

Data structure

We provide one video sequence for the Folding task.

RGB images

The following directory contains RGB images of the video sequence rendered with Unity. Note that these images are only for visualization, so we have additionally rendered both hands.

  • Tshirt_folding_hands_rgb

Processed Data: Zarr data

All multi-view RGB-D images are transformed into point clouds and merged into Zarr format. All other annotations are also stored in the same Zarr data. The following directory contains samples for the Folding task in Zarr format.

  • VR_Folding/vr_simulation_folding_dataset_example.zarr/Tshirt

Here is the detailed tree structure of a data example for one frame.

00068_Tshirt_000000_000000
 ├── grip_vertex_id
 │   ├── left_grip_vertex_id (1,) int32
 │   └── right_grip_vertex_id (1,) int32
 ├── hand_pose
 │   ├── left_hand_euler (25, 3) float32
 │   ├── left_hand_pos (25, 3) float32
 │   ├── right_hand_euler (25, 3) float32
 │   └── right_hand_pos (25, 3) float32
 ├── marching_cube_mesh
 │   ├── is_vertex_on_surface (6410,) bool
 │   ├── marching_cube_faces (12816, 3) int32
 │   └── marching_cube_verts (6410, 3) float32
 ├── mesh
 │   ├── cloth_faces_tri (8312, 3) int32
 │   ├── cloth_nocs_verts (4434, 3) float32
 │   └── cloth_verts (4434, 3) float32
 └── point_cloud
     ├── cls (30000,) uint8
     ├── nocs (30000, 3) float16
     ├── point (30000, 3) float16
     ├── rgb (30000, 3) uint8
     └── sizes (4,) int64

Visualization

We provide a simple script for visualizing data in Zarr format. The script filters out static frames (i.e. frames in which the garment pose remains unchanged) and visualizes only the dynamic frames of the video.
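For illustration, a static-frame filter along these lines could compare `cloth_verts` between frames, keeping a frame only if some vertex moved beyond a threshold. The threshold and the exact criterion here are assumptions, not the script's actual logic.

```python
import numpy as np

# Hypothetical static-frame filter: keep a frame only if some cloth vertex
# moved more than eps since the last kept frame. eps is an assumed threshold.
def filter_dynamic_frames(vert_seq, eps=1e-4):
    """vert_seq: list of (N, 3) arrays of cloth_verts, one per frame."""
    kept = [0]  # always keep the first frame as the reference
    for i in range(1, len(vert_seq)):
        disp = np.abs(vert_seq[i] - vert_seq[kept[-1]]).max()
        if disp > eps:
            kept.append(i)
    return kept

# Example: frame 1 is identical to frame 0 (static), frame 2 moves.
v0 = np.zeros((4, 3), dtype=np.float32)
v2 = v0 + 0.01
print(filter_dynamic_frames([v0, v0.copy(), v2]))  # [0, 2]
```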

Setup

Requirements: Python >= 3.8

This code has been tested on Windows 10 and Ubuntu 18.04.

pip install -r requirements.txt

Run

python vis_samples.py

This script will use Open3D to visualize the following elements:

  • the input partial point cloud with colors
  • the grasping points of both hands (represented by blue and red spheres)
  • the complete GT mesh colored with NOCS coordinates
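As a rough illustration of the NOCS coloring (not the script's exact code): since NOCS coordinates live in a normalized unit cube, they can be used directly as per-vertex RGB colors after clipping to [0, 1].

```python
import numpy as np

# Hypothetical NOCS-to-color mapping: treat each normalized (x, y, z)
# coordinate as an (r, g, b) color, clipped to the valid [0, 1] range.
def nocs_to_rgb(nocs):
    return np.clip(nocs.astype(np.float64), 0.0, 1.0)

# Tiny example with two NOCS points, one slightly out of range.
nocs = np.array([[0.2, 0.5, 0.9], [1.2, -0.1, 0.5]], dtype=np.float16)
colors = nocs_to_rgb(nocs)
print(colors.shape)  # (2, 3)
```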

Note that our recorded data in Zarr format contains complete hand poses (positions and Euler angles of the 25 bones of each hand).

In this simplified 3D visualization script, we visualize only the valid grasping points on the garment surface instead of the complete hands, to keep the implementation simple.
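For illustration, such a grasp-point lookup might index `cloth_verts` with `grip_vertex_id`; the use of -1 as a "no grasp" sentinel below is an assumption for this sketch, not documented behavior of the dataset.

```python
import numpy as np

# Hypothetical helper: recover the 3D grasp point of one hand from its
# grip_vertex_id entry. Treating a negative id as "not grasping" is an
# assumption made for this sketch.
def grasp_point(cloth_verts, grip_vertex_id):
    vid = int(grip_vertex_id[0])  # grip_vertex_id has shape (1,)
    if vid < 0:
        return None  # hand is not holding the garment in this frame
    return cloth_verts[vid]

verts = np.arange(12, dtype=np.float32).reshape(4, 3)
print(grasp_point(verts, np.array([2], dtype=np.int32)))   # [6. 7. 8.]
print(grasp_point(verts, np.array([-1], dtype=np.int32)))  # None
```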