
# LAVN Dataset

## Data Organization

After downloading and unzipping the zip files, please reorganize the files into the following structure:

```
LAVN
   |--src
      |--makeData_virtual.py
      |--makeData_real.py
      ...
   |--Virtual
      |--Gibson
         |--traj_<SCENE_ID>
            |--worker_graph.json
            |--rgb_<FRAME_ID>.jpg
            |--depth_<FRAME_ID>.jpg
         |--traj_Ackermanville
            |--worker_graph.json
            |--rgb_00001.jpg
            |--rgb_00002.jpg
            ...
            |--depth_00001.jpg
            |--depth_00002.jpg
            ...
         ...
      |--Matterport
         |--traj_<SCENE_ID>
            |--worker_graph.json
            |--rgb_<FRAME_ID>.jpg
            |--depth_<FRAME_ID>.jpg
         |--traj_00000-kfPV7w3FaU5
            |--worker_graph.json
            |--rgb_00001.jpg
            |--rgb_00002.jpg
            ...
            |--depth_00001.jpg
            |--depth_00002.jpg
            ...
         ...
   |--Real
      |--Campus
         |--worker_graph.json
         |--traj_480p_<SCENE_ID>
            |--rgb_<FRAME_ID>.jpg
         |--traj_480p_scene00
            |--rgb_00001.jpg
```

where the main landmark annotation scripts `makeData_virtual.py` and `makeData_real.py` are in folder (1) `src`, and folders (2) `Virtual` and (3) `Real` store the trajectories collected in simulation and in the real world, respectively. Each trajectory's data is organized in the following format:

```
|--traj_<SCENE_ID>
   |--worker_graph.json
   |--rgb_<FRAME_ID>.jpg
   |--depth_<FRAME_ID>.jpg
```

where `<SCENE_ID>` exactly matches the original scene ID in Gibson or Matterport as used by the photo-realistic Habitat simulator. Images are saved in either `.jpg` or `.png` format. Note that the RGB images are the main visual representation, while depth is auxiliary visual information captured only in the virtual environments.
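For illustration, the following minimal Python sketch iterates over the frames of one trajectory and pairs each RGB image with its depth companion when one exists; the trajectory path is an example, not a fixed value:

```python
import os

# Example trajectory path (an assumption for illustration; use any traj_* folder).
traj_dir = "LAVN/Virtual/Gibson/traj_Ackermanville"

rgb_files = sorted(f for f in os.listdir(traj_dir) if f.startswith("rgb_"))
for rgb_name in rgb_files:
    # "rgb_00001.jpg" -> frame ID "00001"
    frame_id = os.path.splitext(rgb_name)[0].split("_", 1)[1]
    rgb_path = os.path.join(traj_dir, rgb_name)

    # Depth is captured only in the virtual environments and may be .jpg or .png.
    depth_path = None
    for ext in (".jpg", ".png"):
        candidate = os.path.join(traj_dir, "depth_" + frame_id + ext)
        if os.path.exists(candidate):
            depth_path = candidate
            break

    print(rgb_path, depth_path)
```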

`worker_graph.json` stores the metadata as a Python dictionary saved as a JSON file with the following format:

{"node<NODE_ID>":
  {"img_path": "./human_click_dataset/traj_<SCENE_ID>/rgb_<FRAME_ID>.jpg",
   "depth_path": "./human_click_dataset/traj_<SCENE_ID>/depth_<FRAME_ID>.png",
   "location": [<LOC_X>, <LOC_Y>, <LOC_Z>],
   "orientation": <ORIENT>,
   "click_point": [<COOR_X>, <COOR_Y>],
   "reason": ""},
  ...
 "node0":
  {"img_path": "./human_click_dataset/traj_00101-n8AnEznQQpv/rgb_00002.jpg",
   "depth_path": "./human_click_dataset/traj_00101-n8AnEznQQpv/depth_00002.jpg",
   "location": [0.7419548034667969, -2.079209327697754, -0.5635206699371338],
   "orientation": 0.2617993967423121,
   "click_point": [270, 214],
   "reason": ""}
 ...
 "edges":...
 "goal_location": null,
 "start_location": [<LOC_X>, <LOC_Y>, <LOC_Z>],
 "landmarks": [[[<COOR_X>, <COOR_Y>], <FRAME_ID>], ...],
 "actions": ["ACTION_NAME", "turn_right", "move_forward", "turn_right", ...]
 "env_name": <SCENE_ID>
}

where `[<LOC_X>, <LOC_Y>, <LOC_Z>]` is the 3-axis location vector and `<ORIENT>` is the orientation, recorded only in simulation. `[<COOR_X>, <COOR_Y>]` are the image coordinates of a landmark click. `<ACTION_NAME>` stores the action the robot takes from the current frame to the next frame.
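As an illustration of this layout, the sketch below loads a trajectory's `worker_graph.json` and reads a few of the fields described above; the trajectory path is an example:

```python
import json
import os

# Example trajectory path (an assumption for illustration).
traj_dir = "LAVN/Virtual/Matterport/traj_00000-kfPV7w3FaU5"

with open(os.path.join(traj_dir, "worker_graph.json")) as f:
    graph = json.load(f)

print("environment:", graph["env_name"])
print("start location:", graph["start_location"])
print("first actions:", graph["actions"][:5])

# Each landmark pairs a click coordinate with the frame it was annotated on.
for (coor_x, coor_y), frame_id in graph["landmarks"]:
    print(f"landmark clicked at ({coor_x}, {coor_y}) in frame {frame_id}")

# Per-node records hold the image paths, 3-D location, and click point.
node = graph["node0"]
print(node["img_path"], node["location"], node["click_point"])
```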

## Long-Term Maintenance Plan

We will follow a long-term maintenance plan to ensure the dataset's accessibility and quality for future research:

- **Data Standards:** Data formats will be checked regularly with scripts that validate data consistency (a sketch of such a check appears after this list).

- **Data Cleaning:** Data that is in an incorrect format, missing, or contains invalid values will be removed.

- **Scheduled Updates:** We will follow a monthly schedule for data updates.

- **Storage Solutions:** A Zenodo repository with a DOI will serve as the public online storage. A second copy will be stored on a private cloud server, and a third copy on a local drive.

- **Data Backup:** If any of the three copies above is found to be inaccessible, it will be restored immediately from one of the other two.

- **Documentation:** Our documentation will be updated regularly to reflect feedback from users.
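For the Data Standards item, a minimal consistency-check sketch might look like the following; the specific checks and the root path are illustrative assumptions, not the official validation script:

```python
import json
import os

# Example subset to validate (an assumption for illustration).
root = "LAVN/Virtual/Gibson"

for traj in sorted(os.listdir(root)):
    traj_dir = os.path.join(root, traj)
    if not traj.startswith("traj_") or not os.path.isdir(traj_dir):
        continue

    # Every trajectory should carry a parseable worker_graph.json.
    graph_path = os.path.join(traj_dir, "worker_graph.json")
    if not os.path.exists(graph_path):
        print(f"{traj}: missing worker_graph.json")
        continue
    try:
        with open(graph_path) as f:
            json.load(f)
    except json.JSONDecodeError as err:
        print(f"{traj}: invalid JSON ({err})")

    # In virtual trajectories, every RGB frame should have a depth companion.
    for name in os.listdir(traj_dir):
        if name.startswith("rgb_"):
            stem = os.path.splitext(name)[0].replace("rgb_", "depth_", 1)
            if not any(
                os.path.exists(os.path.join(traj_dir, stem + ext))
                for ext in (".jpg", ".png")
            ):
                print(f"{traj}: missing depth for {name}")
```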