Initial dataset release
This view is limited to 50 files because the commit contains too many changes.
- .gitattributes +7 -0
- README.md +71 -0
- assets/dataset_easy.gif +3 -0
- assets/dataset_hard.gif +3 -0
- assets/dataset_medium.gif +3 -0
- bpy_render_views.py +272 -0
- chair/easy/esim.bag +3 -0
- chair/easy/esim.conf +45 -0
- chair/hard/esim.bag +3 -0
- chair/hard/esim.conf +45 -0
- chair/medium/esim.bag +3 -0
- chair/medium/esim.conf +45 -0
- chair/views/test/depth/r_0.exr +3 -0
- chair/views/test/depth/r_1.exr +3 -0
- chair/views/test/depth/r_10.exr +3 -0
- chair/views/test/depth/r_100.exr +3 -0
- chair/views/test/depth/r_101.exr +3 -0
- chair/views/test/depth/r_102.exr +3 -0
- chair/views/test/depth/r_103.exr +3 -0
- chair/views/test/depth/r_104.exr +3 -0
- chair/views/test/depth/r_105.exr +3 -0
- chair/views/test/depth/r_106.exr +3 -0
- chair/views/test/depth/r_107.exr +3 -0
- chair/views/test/depth/r_108.exr +3 -0
- chair/views/test/depth/r_109.exr +3 -0
- chair/views/test/depth/r_11.exr +3 -0
- chair/views/test/depth/r_110.exr +3 -0
- chair/views/test/depth/r_111.exr +3 -0
- chair/views/test/depth/r_112.exr +3 -0
- chair/views/test/depth/r_113.exr +3 -0
- chair/views/test/depth/r_114.exr +3 -0
- chair/views/test/depth/r_115.exr +3 -0
- chair/views/test/depth/r_116.exr +3 -0
- chair/views/test/depth/r_117.exr +3 -0
- chair/views/test/depth/r_118.exr +3 -0
- chair/views/test/depth/r_119.exr +3 -0
- chair/views/test/depth/r_12.exr +3 -0
- chair/views/test/depth/r_120.exr +3 -0
- chair/views/test/depth/r_121.exr +3 -0
- chair/views/test/depth/r_122.exr +3 -0
- chair/views/test/depth/r_123.exr +3 -0
- chair/views/test/depth/r_124.exr +3 -0
- chair/views/test/depth/r_125.exr +3 -0
- chair/views/test/depth/r_126.exr +3 -0
- chair/views/test/depth/r_127.exr +3 -0
- chair/views/test/depth/r_128.exr +3 -0
- chair/views/test/depth/r_129.exr +3 -0
- chair/views/test/depth/r_13.exr +3 -0
- chair/views/test/depth/r_130.exr +3 -0
- chair/views/test/depth/r_131.exr +3 -0
.gitattributes
CHANGED
@@ -53,3 +53,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
```diff
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+
+# Blend files
+*.blend filter=lfs diff=lfs merge=lfs -text
+# Image files
+*.exr filter=lfs diff=lfs merge=lfs -text
+# ROS bags
+*.bag filter=lfs diff=lfs merge=lfs -text
```
README.md
ADDED
@@ -0,0 +1,71 @@

---
pretty_name: Robust e-NeRF
paperswithcode_id: robust-e-nerf-synthetic-event-dataset
license: cc-by-4.0
viewer: false
size_categories:
- n<1K
---

# Robust *e*-NeRF Synthetic Event Dataset

<table style="display: block">
  <tr>
    <td><a href="https://wengflow.github.io/robust-e-nerf"><img src="https://img.shields.io/badge/Project_Page-black?style=for-the-badge" alt="Project Page"></a></td>
    <td><a href="https://arxiv.org/abs/2309.08596"><img src="https://img.shields.io/badge/arXiv-black?style=for-the-badge" alt="arXiv"></a></td>
    <td><a href="https://github.com/wengflow/robust-e-nerf"><img src="https://img.shields.io/badge/Code-black?style=for-the-badge" alt="Code"></a></td>
    <td><a href="https://github.com/wengflow/rpg_esim"><img src="https://img.shields.io/badge/Simulator-black?style=for-the-badge" alt="Simulator"></a></td>
  </tr>
</table>

<p align="center">
  <img src="assets/dataset_easy.gif" alt="Easy" width=60%/>
  <img src="assets/dataset_medium.gif" alt="Medium" width=60%/>
  <img src="assets/dataset_hard.gif" alt="Hard" width=60%/>
</p>

This repository contains the synthetic event dataset used in [**Robust *e*-NeRF**](https://wengflow.github.io/robust-e-nerf) to study the collective effect of camera speed profile, contrast threshold variation and refractory period on the quality of NeRF reconstruction from a moving event camera. The dataset is simulated using an [improved version of ESIM](https://github.com/wengflow/rpg_esim) with three camera configurations of increasing difficulty (*easy*, *medium* and *hard*) on seven Realistic Synthetic 360 scenes (adopted in the synthetic experiments of NeRF), resulting in 21 sequence recordings in total. Please refer to the [Robust *e*-NeRF paper](https://arxiv.org/abs/2309.08596) for more details.

This synthetic event dataset allows for a retrospective comparison between event-based and image-based NeRF reconstruction methods, as the event sequences were simulated under conditions highly similar to those of the NeRF synthetic dataset. In particular, we adopt the same camera intrinsics and the same camera distance to the object at the origin. Furthermore, the event camera travels in a hemi-/spherical spiral motion about the object, yielding a similar camera pose distribution for training, and we use the same test camera poses/views. Nonetheless, this dataset is not specific to NeRF reconstruction; it is also suitable for novel view synthesis, 3D reconstruction, localization and SLAM in general.

If you use this synthetic event dataset for your work, please cite:

```bibtex
@inproceedings{low2023_robust-e-nerf,
  title = {Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion},
  author = {Low, Weng Fei and Lee, Gim Hee},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year = {2023}
}
```

## Dataset Structure and Contents

This synthetic event dataset is organized first by scene, then by difficulty level. Each sequence recording is given as a [ROS bag](http://wiki.ros.org/rosbag) named `esim.bag`, with the following data streams:

| ROS Topic | Data | Publishing Rate (Hz) |
| :--- | :--- | :--- |
| `/cam0/events` | Events | - |
| `/cam0/pose` | Camera pose | 1000 |
| `/imu` | IMU measurements with simulated noise | 1000 |
| `/cam0/image_raw` | RGB image | 250 |
| `/cam0/depthmap` | Depth map | 10 |
| `/cam0/optic_flow` | Optical flow map | 10 |
| `/cam0/camera_info` | Camera intrinsic and lens distortion parameters | 10 |

Each bag is obtained by running the improved ESIM with the associated `esim.conf` configuration file, which references the camera intrinsics configuration files `pinhole_mono_nodistort_f={1111, 1250}.yaml` and the camera trajectory CSV files `{hemisphere, sphere}_spiral-rev=4[...].csv`.
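For orientation, each event in the `/cam0/events` stream is a tuple of timestamp, pixel coordinates and polarity (ESIM publishes these as `dvs_msgs/EventArray` messages). As a minimal sketch, not part of the dataset tooling, such a stream can be aggregated into a per-pixel polarity-sum frame; the events below are hand-written hypothetical values rather than data from a real bag:

```python
from collections import defaultdict

def accumulate_events(events, contrast_threshold=0.25):
    """Sum signed contrast steps per pixel: each event contributes
    +C (positive polarity) or -C (negative) to the log-intensity
    change at its pixel (x, y)."""
    frame = defaultdict(float)
    for t, x, y, polarity in events:
        frame[(x, y)] += contrast_threshold if polarity else -contrast_threshold
    return dict(frame)

# hypothetical events: (timestamp [s], x, y, polarity)
events = [
    (0.001, 3, 5, True),
    (0.002, 3, 5, True),
    (0.003, 7, 1, False),
]
frame = accumulate_events(events)
print(frame[(3, 5)])  # prints 0.5 (two positive events at C = 0.25)
print(frame[(7, 1)])  # prints -0.25
```

The nominal contrast threshold of 0.25 matches the `contrast_threshold_{pos, neg}` values in the `esim.conf` files below.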
The validation and test views of each scene are given in the `views/` folder, which is structured according to the NeRF synthetic dataset (except for the depth and normal maps). These views are rendered from the scene Blend-files, given in the `scenes/` folder. Specifically, we create a [Conda](https://docs.conda.io/en/latest/) environment with [Blender as a Python module](https://docs.blender.org/api/current/info_advanced_blender_as_bpy.html) installed, according to [these instructions](https://github.com/wengflow/rpg_esim#blender), and run the `bpy_render_views.py` Python script to render the evaluation views.

## Setup

1. Install [Git LFS](https://git-lfs.com/) according to the [official instructions](https://github.com/git-lfs/git-lfs?utm_source=gitlfs_site&utm_medium=installation_link&utm_campaign=gitlfs#installing).
2. Set up Git LFS for your user account with:
   ```bash
   git lfs install
   ```
3. Clone this dataset repository into the desired destination directory with:
   ```bash
   git lfs clone https://huggingface.co/datasets/wengflow/robust-e-nerf
   ```
4. To minimize disk usage, you may remove the `.git/` folder. Note, however, that this complicates pulling future changes from the upstream dataset repository.
assets/dataset_easy.gif
ADDED (Git LFS)
assets/dataset_hard.gif
ADDED (Git LFS)
assets/dataset_medium.gif
ADDED (Git LFS)
bpy_render_views.py
ADDED
@@ -0,0 +1,272 @@

```python
"""
Adapted from `360_view.py` & `360_view_test.py` in the original NeRF synthetic
Blender dataset blend-files.
"""
import argparse
import os
import json
from math import radians
import bpy
import numpy as np


COLOR_SPACES = [ "display", "linear" ]
DEVICES = [ "cpu", "cuda", "optix" ]

CIRCLE_FIXED_START = ( 0, 0, 0 )
CIRCLE_FIXED_END = ( .7, 0, 0 )


def listify_matrix(matrix):
    matrix_list = []
    for row in matrix:
        matrix_list.append(list(row))
    return matrix_list


def parent_obj_to_camera(b_camera):
    origin = (0, 0, 0)
    b_empty = bpy.data.objects.new("Empty", None)
    b_empty.location = origin
    b_camera.parent = b_empty  # setup parenting

    scn = bpy.context.scene
    scn.collection.objects.link(b_empty)
    bpy.context.view_layer.objects.active = b_empty

    return b_empty


def main(args):
    # open the scene blend-file
    bpy.ops.wm.open_mainfile(filepath=args.blend_path)

    # initialize render settings
    scene = bpy.data.scenes["Scene"]
    scene.render.engine = "CYCLES"
    scene.render.use_persistent_data = True

    if args.device == "cpu":
        bpy.context.preferences.addons["cycles"].preferences \
           .compute_device_type = "NONE"
        bpy.context.scene.cycles.device = "CPU"
    elif args.device == "cuda":
        bpy.context.preferences.addons["cycles"].preferences \
           .compute_device_type = "CUDA"
        bpy.context.scene.cycles.device = "GPU"
    elif args.device == "optix":
        bpy.context.preferences.addons["cycles"].preferences \
           .compute_device_type = "OPTIX"
        bpy.context.scene.cycles.device = "GPU"
    bpy.context.preferences.addons["cycles"].preferences.get_devices()

    # initialize compositing nodes
    scene.view_layers[0].use_pass_combined = True
    scene.use_nodes = True
    tree = scene.node_tree

    if args.depth:
        scene.view_layers[0].use_pass_z = True
        combine_color = tree.nodes.new("CompositorNodeCombineColor")
        depth_output = tree.nodes.new("CompositorNodeOutputFile")
    if args.normal:
        scene.view_layers[0].use_pass_normal = True
        normal_output = tree.nodes.new("CompositorNodeOutputFile")
    if args.depth or args.normal:
        render_layers = tree.nodes.new("CompositorNodeRLayers")

    # initialize RGB render image output settings
    scene.render.filepath = args.renders_path
    scene.render.use_file_extension = True
    scene.render.use_overwrite = True
    scene.render.image_settings.color_mode = "RGBA"

    if args.color_space == "display":
        scene.render.image_settings.file_format = "PNG"
        scene.render.image_settings.color_depth = "8"
        scene.render.image_settings.color_management = "FOLLOW_SCENE"
    elif args.color_space == "linear":
        scene.render.image_settings.file_format = "OPEN_EXR"
        scene.render.image_settings.color_depth = "32"
        scene.render.image_settings.use_zbuffer = False

    if args.depth:
        # initialize depth render image output settings
        depth_output.base_path = os.path.join(args.renders_path, "depth")
        depth_output.file_slots[0].use_node_format = True
        scene.frame_set(0)

        depth_output.format.file_format = "OPEN_EXR"
        depth_output.format.color_mode = "RGB"
        depth_output.format.color_depth = "32"
        depth_output.format.exr_codec = "NONE"
        depth_output.format.use_zbuffer = False

        # link compositing nodes
        links = tree.links

        # output depth img (RGB img is output via the existing composite node);
        # the depth value is packed into the red channel
        combine_color.mode = "RGB"
        links.new(render_layers.outputs["Depth"], combine_color.inputs["Red"])
        combine_color.inputs["Green"].default_value = 0
        combine_color.inputs["Blue"].default_value = 0
        combine_color.inputs["Alpha"].default_value = 1

        links.new(combine_color.outputs["Image"], depth_output.inputs["Image"])

    if args.normal:
        # initialize normal render image output settings
        normal_output.base_path = os.path.join(args.renders_path, "normal")
        normal_output.file_slots[0].use_node_format = True
        scene.frame_set(0)

        normal_output.format.file_format = "OPEN_EXR"
        normal_output.format.color_mode = "RGB"
        normal_output.format.color_depth = "32"
        normal_output.format.exr_codec = "NONE"
        normal_output.format.use_zbuffer = False

        # link compositing nodes
        links = tree.links

        # output normal img (RGB img is output via the existing composite
        # node); the normal pass is linked directly, so no CombineColor node
        # is needed here
        links.new(render_layers.outputs["Normal"],
                  normal_output.inputs["Image"])

    # initialize camera settings
    scene.render.dither_intensity = 0.0
    scene.render.film_transparent = True
    scene.render.resolution_percentage = 100
    scene.render.resolution_x = args.resolution[0]
    scene.render.resolution_y = args.resolution[1]

    cam = bpy.data.objects["Camera"]
    cam.location = (0, 4.0, 0.5)
    cam.rotation_mode = "XYZ"
    cam_constraint = cam.constraints.new(type="TRACK_TO")
    cam_constraint.track_axis = "TRACK_NEGATIVE_Z"
    cam_constraint.up_axis = "UP_Y"
    b_empty = parent_obj_to_camera(cam)
    cam_constraint.target = b_empty

    # preprocess & derive paths
    args.renders_path = os.path.normpath(args.renders_path)  # remove trailing slashes
    folder_name = os.path.basename(args.renders_path)
    renders_parent_path = os.path.dirname(args.renders_path)
    transforms_path = os.path.join(
        renders_parent_path, f"transforms_{folder_name}.json"
    )

    # render novel views
    stepsize = 360.0 / args.num_views
    if not args.random_views:
        vertical_diff = CIRCLE_FIXED_END[0] - CIRCLE_FIXED_START[0]
        b_empty.rotation_euler = CIRCLE_FIXED_START
        b_empty.rotation_euler[0] = CIRCLE_FIXED_START[0] + vertical_diff

    out_data = {
        "camera_angle_x": cam.data.angle_x,
        "frames": []
    }
    for i in range(0, args.num_views):
        if args.random_views:
            if args.upper_views:
                rot = np.random.uniform(0, 1, size=3) * (1, 0, 2*np.pi)
                rot[0] = np.abs(np.arccos(1 - 2 * rot[0]) - np.pi/2)
                b_empty.rotation_euler = rot
            else:
                b_empty.rotation_euler = np.random.uniform(0, 2*np.pi, size=3)
        else:
            print("Rotation {}, {}".format((stepsize * i), radians(stepsize * i)))

        scene.render.filepath = os.path.join(args.renders_path, f"r_{i}")
        if args.depth:
            depth_output.file_slots[0].path = f"r_{i}"
        if args.normal:
            normal_output.file_slots[0].path = f"r_{i}"
        bpy.ops.render.render(write_still=True)

        # remove the "0000" frame-number suffix in the depth & normal map filenames
        if args.depth:
            os.rename(os.path.join(depth_output.base_path, f"r_{i}0000.exr"),
                      os.path.join(depth_output.base_path, f"r_{i}.exr"))
        if args.normal:
            os.rename(os.path.join(normal_output.base_path, f"r_{i}0000.exr"),
                      os.path.join(normal_output.base_path, f"r_{i}.exr"))

        frame_data = {
            "file_path": os.path.join(".", os.path.relpath(
                scene.render.filepath, start=renders_parent_path
            )),
            "rotation": radians(stepsize),
            "transform_matrix": listify_matrix(cam.matrix_world)
        }
        out_data["frames"].append(frame_data)

        if args.random_views:
            if args.upper_views:
                rot = np.random.uniform(0, 1, size=3) * (1, 0, 2*np.pi)
                rot[0] = np.abs(np.arccos(1 - 2 * rot[0]) - np.pi/2)
                b_empty.rotation_euler = rot
            else:
                b_empty.rotation_euler = np.random.uniform(0, 2*np.pi, size=3)
        else:
            b_empty.rotation_euler[0] = (
                CIRCLE_FIXED_START[0]
                + (np.cos(radians(stepsize*i))+1)/2 * vertical_diff
            )
            b_empty.rotation_euler[2] += radians(2*stepsize)

    with open(transforms_path, "w") as out_file:
        json.dump(out_data, out_file, indent=4)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description=("Script for rendering novel views of"
                     " synthetic Blender scenes.")
    )
    parser.add_argument(
        "blend_path", type=str,
        help="Path to the blend-file of the synthetic Blender scene."
    )
    parser.add_argument(
        "renders_path", type=str,
        help="Desired path to the novel view renders."
    )
    parser.add_argument(
        "num_views", type=int,
        help="Number of novel view renders."
    )
    parser.add_argument(
        "resolution", type=int, nargs=2,
        help="Image resolution of the novel view renders."
    )
    parser.add_argument(
        "--color_space", type=str, choices=COLOR_SPACES, default="display",
        help="Color space of the output novel view images."
    )
    parser.add_argument(
        "--device", type=str, choices=DEVICES, default="cpu",
        help="Compute device type for rendering."
    )
    parser.add_argument(
        "--random_views", action="store_true",
        help="Randomly sample novel views."
    )
    parser.add_argument(
        "--upper_views", action="store_true",
        help="Only sample novel views from the upper hemisphere."
    )
    parser.add_argument(
        "--depth", action="store_true",
        help="Render depth maps too."
    )
    parser.add_argument(
        "--normal", action="store_true",
        help="Render normal maps too."
    )
    args = parser.parse_args()

    main(args)
```
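In the deterministic (non-random) branch of the render loop above, the camera's elevation oscillates between `CIRCLE_FIXED_START[0]` and `CIRCLE_FIXED_END[0]` (0 to 0.7 rad) as the parent empty spins. The update rule can be checked standalone, without `bpy`:

```python
from math import radians, cos

CIRCLE_FIXED_START = (0, 0, 0)
CIRCLE_FIXED_END = (.7, 0, 0)

def elevation(i, num_views):
    # mirrors the per-view euler[0] update in bpy_render_views.py
    stepsize = 360.0 / num_views
    vertical_diff = CIRCLE_FIXED_END[0] - CIRCLE_FIXED_START[0]
    return CIRCLE_FIXED_START[0] \
        + (cos(radians(stepsize * i)) + 1) / 2 * vertical_diff

angles = [elevation(i, num_views=100) for i in range(100)]
# the elevation stays within [0, 0.7] rad and starts at its peak (i = 0)
assert all(0.0 <= a <= 0.7 + 1e-12 for a in angles)
assert abs(angles[0] - 0.7) < 1e-12
```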
chair/easy/esim.bag
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:a3f845365b05b35920dba1353beafdd3d087ac879dccede14daf386b6102da1c
size 5586808631
```
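Before `git lfs pull`, large files such as `esim.bag` exist in a clone only as small pointer files in the format shown above. A minimal parser for that pointer format (a sketch for inspection purposes, not part of the dataset tooling):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file ('key value' lines) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:a3f845365b05b35920dba1353beafdd3d087ac879dccede14daf386b6102da1c
size 5586808631
"""
info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e9)  # prints 5.586808631 (the easy bag is ~5.6 GB)
```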
chair/easy/esim.conf
ADDED
@@ -0,0 +1,45 @@
```
--vmodule=data_provider_online_render=0
--random_seed=50
--data_source=0
--path_to_output_bag=/data/wflow/datasets/robust-e-nerf/chair/easy/esim.bag

--contrast_threshold_pos=0.25
--contrast_threshold_neg=0.25
--contrast_threshold_sigma_pos=0
--contrast_threshold_sigma_neg=0
--refractory_period_ns=0

--exposure_time_ms=0.0
--use_log_image=1
--log_eps=0.00196078431
# 0.5/255
--simulate_color_events=true

--calib_filename=/data/wflow/datasets/robust-e-nerf/pinhole_mono_nodistort_f=1111.yaml

--renderer_type=4
--blend_file=/data/wflow/datasets/robust-e-nerf/scenes/chair.blend
--blender_bridge_port=5558
--blender_render_device_type=2
--blender_render_device_id=6
--blender_interm_color_space=0
--blender_interm_rgba_file=/tmp/robust_e_nerf_rgba_chair-easy
--blender_interm_depth_file=/tmp/robust_e_nerf_depth_chair-easy

--trajectory_type=1
--trajectory_spline_order=5
--trajectory_num_spline_segments=400
--trajectory_lambda=0.0
--trajectory_csv_file=/data/wflow/datasets/robust-e-nerf/hemisphere_spiral-rev=4.csv
--trajectory_csv_file_rotation_repr=1

--simulation_minimum_framerate=50.0
--simulation_imu_rate=1000.0
--simulation_adaptive_sampling_method=1
--simulation_adaptive_sampling_lambda=0.5

--ros_publisher_frame_rate=250
--ros_publisher_depth_rate=10
--ros_publisher_optic_flow_rate=10
--ros_publisher_pointcloud_rate=10
--ros_publisher_camera_info_rate=10
```
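Each `esim.conf` is a gflags-style flagfile: one `--key=value` per line, with blank lines and `#` comment lines ignored. A small reader for this format (a sketch under that assumption, not ESIM's own flag loader):

```python
def read_flagfile(lines):
    """Parse gflags-style '--key=value' lines into a dict,
    skipping blank lines and '#' comments."""
    flags = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.lstrip("-").partition("=")
        flags[key] = value
    return flags

# excerpt from the easy configuration above
conf = """\
--contrast_threshold_pos=0.25
--log_eps=0.00196078431
# 0.5/255
--refractory_period_ns=0
""".splitlines()

flags = read_flagfile(conf)
print(flags["refractory_period_ns"])  # prints 0 (no refractory period on easy)
```

Comparing the three difficulty levels this way shows exactly which flags change: `contrast_threshold_sigma_{pos, neg}`, `refractory_period_ns` and the trajectory CSV.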
chair/hard/esim.bag
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:7d5757e20acbe6795931db2ab85ed0badfa8dbc616fa6c54c89d1331ed34c448
size 9778157277
```
chair/hard/esim.conf
ADDED
@@ -0,0 +1,45 @@
```
--vmodule=data_provider_online_render=0
--random_seed=50
--data_source=0
--path_to_output_bag=/data/wflow/datasets/robust-e-nerf/chair/hard/esim.bag

--contrast_threshold_pos=0.25
--contrast_threshold_neg=0.25
--contrast_threshold_sigma_pos=0.06
--contrast_threshold_sigma_neg=0.06
--refractory_period_ns=25000000

--exposure_time_ms=0.0
--use_log_image=1
--log_eps=0.00196078431
# 0.5/255
--simulate_color_events=true

--calib_filename=/data/wflow/datasets/robust-e-nerf/pinhole_mono_nodistort_f=1111.yaml

--renderer_type=4
--blend_file=/data/wflow/datasets/robust-e-nerf/scenes/chair.blend
--blender_bridge_port=5555
--blender_render_device_type=2
--blender_render_device_id=6
--blender_interm_color_space=0
--blender_interm_rgba_file=/tmp/robust_e_nerf_rgba_chair-hard
--blender_interm_depth_file=/tmp/robust_e_nerf_depth_chair-hard

--trajectory_type=1
--trajectory_spline_order=5
--trajectory_num_spline_segments=2800
--trajectory_lambda=0.0
--trajectory_csv_file=/data/wflow/datasets/robust-e-nerf/hemisphere_spiral-rev=4-num_samples=16001-speed_osc_period=1e+9-speed_osc_scale=8.csv
--trajectory_csv_file_rotation_repr=1

--simulation_minimum_framerate=50.0
--simulation_imu_rate=1000.0
--simulation_adaptive_sampling_method=1
--simulation_adaptive_sampling_lambda=0.5

--ros_publisher_frame_rate=250
--ros_publisher_depth_rate=10
--ros_publisher_optic_flow_rate=10
--ros_publisher_pointcloud_rate=10
--ros_publisher_camera_info_rate=10
```
chair/medium/esim.bag
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:035dc9bdcc626d8517c364f91ccdcabbb00ecb9865c42b32a357d111f1f8f405
size 6616454968
```
chair/medium/esim.conf
ADDED
@@ -0,0 +1,45 @@
```
--vmodule=data_provider_online_render=0
--random_seed=50
--data_source=0
--path_to_output_bag=/data/wflow/datasets/robust-e-nerf/chair/medium/esim.bag

--contrast_threshold_pos=0.25
--contrast_threshold_neg=0.25
--contrast_threshold_sigma_pos=0.03
--contrast_threshold_sigma_neg=0.03
--refractory_period_ns=8000000

--exposure_time_ms=0.0
--use_log_image=1
--log_eps=0.00196078431
# 0.5/255
--simulate_color_events=true

--calib_filename=/data/wflow/datasets/robust-e-nerf/pinhole_mono_nodistort_f=1111.yaml
# a5000
--renderer_type=4
--blend_file=/data/wflow/datasets/robust-e-nerf/scenes/chair.blend
--blender_bridge_port=5555
--blender_render_device_type=2
--blender_render_device_id=5
--blender_interm_color_space=0
--blender_interm_rgba_file=/tmp/robust_e_nerf_rgba_chair-medium
--blender_interm_depth_file=/tmp/robust_e_nerf_depth_chair-medium

--trajectory_type=1
--trajectory_spline_order=5
--trajectory_num_spline_segments=1200
--trajectory_lambda=0.0
--trajectory_csv_file=/data/wflow/datasets/robust-e-nerf/hemisphere_spiral-rev=4-num_samples=8001-speed_osc_period=1e+9-speed_osc_scale=4.csv
--trajectory_csv_file_rotation_repr=1

--simulation_minimum_framerate=50.0
--simulation_imu_rate=1000.0
--simulation_adaptive_sampling_method=1
--simulation_adaptive_sampling_lambda=0.5

--ros_publisher_frame_rate=250
--ros_publisher_depth_rate=10
--ros_publisher_optic_flow_rate=10
--ros_publisher_pointcloud_rate=10
--ros_publisher_camera_info_rate=10
```
chair/views/test/depth/r_0.exr … r_131.exr
ADDED (Git LFS; 38 depth maps listed in lexicographic order, matching the file listing above; the remaining files of this commit are cut off by the 50-file view limit)