# Samples of the VR-Folding dataset

## Data structure

We provide 1 video sequence for the *Folding* task.

### RGB images

The following directory contains the RGB images of the video sequence rendered with Unity. Note that these images are only for visualization, so both hands are additionally rendered in them.

- `Tshirt_folding_hands_rgb`

### Processed Data: Zarr data

All multi-view RGB-D images are transformed into point clouds and merged together in the [zarr](https://zarr.readthedocs.io/en/stable/) format. All other annotations are also contained in the same zarr data. The following directory contains samples for the *Folding* task in zarr format.

- `VR_Folding/vr_simulation_folding_dataset_example.zarr/Tshirt`

Here is the detailed tree structure of a data example for one frame:

```
00068_Tshirt_000000_000000
├── grip_vertex_id
│   ├── left_grip_vertex_id (1,) int32
│   └── right_grip_vertex_id (1,) int32
├── hand_pose
│   ├── left_hand_euler (25, 3) float32
│   ├── left_hand_pos (25, 3) float32
│   ├── right_hand_euler (25, 3) float32
│   └── right_hand_pos (25, 3) float32
├── marching_cube_mesh
│   ├── is_vertex_on_surface (6410,) bool
│   ├── marching_cube_faces (12816, 3) int32
│   └── marching_cube_verts (6410, 3) float32
├── mesh
│   ├── cloth_faces_tri (8312, 3) int32
│   ├── cloth_nocs_verts (4434, 3) float32
│   └── cloth_verts (4434, 3) float32
└── point_cloud
    ├── cls (30000,) uint8
    ├── nocs (30000, 3) float16
    ├── point (30000, 3) float16
    ├── rgb (30000, 3) uint8
    └── sizes (4,) int64
```

## Visualization

We provide a simple script for visualizing data in the zarr format. The script filters out the static frames of the video (i.e. frames in which the garment pose remains unchanged) and visualizes only the dynamic frames.

### Setup

Requirements: Python >= 3.8

This code has been tested on Windows 10 and Ubuntu 18.04.
```
pip install -r requirements.txt
```

### Run

```
python vis_samples.py
```

This script uses Open3D to visualize the following elements:

- the input partial point cloud with colors
- the grasping points of both hands (represented by blue and red spheres)
- the complete ground-truth mesh colored with NOCS coordinates

Note that our recorded data in zarr format contains complete hand poses (positions and Euler angles of 25 bones for each hand). For simplicity and speed, this visualization script only shows the valid grasping points on the garment surface instead of the complete hands.
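As a minimal sketch of how the grasping-point annotations could be used, the snippet below looks up the 3D grasp position of each hand from the arrays in the tree structure above. It assumes that `left_grip_vertex_id` / `right_grip_vertex_id` index into `cloth_verts` and that a negative id marks a hand that is not currently grasping; both assumptions are ours, and the toy data stands in for a real frame.

```python
import numpy as np

# Toy stand-ins for one frame's arrays (shapes follow the tree above).
# Assumption: *_grip_vertex_id indexes into cloth_verts, and a negative
# id means the hand is not grasping -- not confirmed by the dataset docs.
cloth_verts = np.random.rand(4434, 3).astype(np.float32)
left_grip_vertex_id = np.array([120], dtype=np.int32)
right_grip_vertex_id = np.array([-1], dtype=np.int32)  # not grasping

def grasp_point(verts, grip_id):
    """Return the 3D grasping point, or None if the id is invalid."""
    i = int(grip_id[0])
    return verts[i] if 0 <= i < len(verts) else None

left = grasp_point(cloth_verts, left_grip_vertex_id)
right = grasp_point(cloth_verts, right_grip_vertex_id)
print(left is not None, right is None)  # True True
```

In a full visualizer, the returned points would become the centers of the blue and red spheres mentioned above.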
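The static-frame filtering described in the Visualization section can be sketched as follows. A frame is treated as static when its garment vertices barely move relative to the previous frame; the function name, threshold value, and toy sequence are our own illustrative choices, not taken from `vis_samples.py`.

```python
import numpy as np

def dynamic_frame_mask(verts_seq, threshold=1e-4):
    """Mark frames whose cloth_verts moved w.r.t. the previous frame.

    verts_seq: (T, V, 3) array of cloth_verts, one slice per frame.
    Returns a boolean mask; the first frame is always kept.
    """
    mask = np.ones(len(verts_seq), dtype=bool)
    for t in range(1, len(verts_seq)):
        max_disp = np.abs(verts_seq[t] - verts_seq[t - 1]).max()
        mask[t] = max_disp > threshold  # static frames fall below threshold
    return mask

# Toy sequence: frame 1 repeats frame 0 (static), frame 2 moves.
seq = np.zeros((3, 4434, 3), dtype=np.float32)
seq[2] += 0.01
print(dynamic_frame_mask(seq))  # [ True False  True]
```

Only frames flagged `True` would then be passed on to the Open3D viewer.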