Commit `82c5dfc` by shsolanki (parent: `77e4be7`) — files changed (1): README.md (+81, −3)
---
license: cc-by-4.0
---

# PhysicalAI Autonomous Vehicles NuRec-AV-Object-Benchmark

## Dataset Description

The **NuRec-AV-Object-Benchmark** is an object-centric benchmark for evaluating image-to-3D reconstruction systems on autonomous vehicle data. Introduced alongside [**Asset Harvester**](https://huggingface.co/nvidia/asset-harvester), it is designed to support systematic evaluation of in-the-wild AV object reconstruction under realistic viewpoint bias and sensor noise.

Unlike curated object datasets with dense coverage, this benchmark reflects the sparse and imperfect observation regime found in real driving logs. Objects are often seen from only one or a few views, with heavy occlusion, motion blur, noisy calibration, rolling-shutter effects, and imperfect geometric alignment.

Each sample is organized under a semantic object category and a sample identifier, and includes object-centric RGB crops, foreground masks, and camera metadata. The benchmark is distributed in two complementary parts:

- `Part_A`: a held-out-view evaluation split with `input_views/` and `reserved_views/`
- `Part_B`: a harder no-ground-truth split with `input_views/` only

### Supported categories

- `commercial_vehicles`
- `consumer_vehicles`
- `other_objects`
- `VRU_pedestrians`
- `VRU_riders`

## Dataset Structure

### Part_A: held-out-view evaluation

`Part_A` provides `input_views/` together with `reserved_views/` that are not used as model input. These reserved views act as held-out reference targets for quantitative evaluation.

Each `Part_A` sample contains:

- `input_views/`
- `reserved_views/`
- per-view `frame_XX.jpeg`
- per-view `mask_XX.png`
- `camera.json`

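A minimal sketch of walking one sample directory under the layout described above. It assumes `camera.json` sits at the sample level and that frame/mask files share a zero-padded index (`frame_00.jpeg` ↔ `mask_00.png`); the `camera.json` schema is not documented here, so it is loaded as opaque JSON. The helper name `list_views` is hypothetical, not part of the dataset tooling.

```python
import json
from pathlib import Path


def list_views(sample_dir):
    """Collect (frame, mask) pairs per view directory of one benchmark sample.

    Assumes the README layout: `input_views/` (plus `reserved_views/` for
    Part_A samples) holding per-view `frame_XX.jpeg` / `mask_XX.png` files,
    and a sample-level `camera.json` loaded as opaque JSON.
    """
    sample_dir = Path(sample_dir)
    views = {}
    for split in ("input_views", "reserved_views"):
        view_dir = sample_dir / split
        if not view_dir.is_dir():
            continue  # Part_B samples have no reserved_views/
        pairs = []
        for frame in sorted(view_dir.glob("frame_*.jpeg")):
            idx = frame.stem.split("_")[1]  # "frame_00" -> "00"
            mask = view_dir / f"mask_{idx}.png"
            pairs.append((frame, mask if mask.exists() else None))
        views[split] = pairs
    camera_path = sample_dir / "camera.json"
    camera = json.loads(camera_path.read_text()) if camera_path.exists() else None
    return views, camera
```

Because the function simply skips missing view directories, the same sketch works for both `Part_A` and `Part_B` samples.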
### Part_B: hard no-ground-truth split

`Part_B` is intentionally more challenging. It contains stronger motion blur, heavier occlusion, and narrower view coverage. No reserved reference views are provided, so this split is intended for harder qualitative or perceptual evaluation settings.

Each `Part_B` sample contains:

- `input_views/`
- per-view `frame_XX.jpeg`
- per-view `mask_XX.png`
- `camera.json`

### Dataset summary

- Total samples: `3716`
- `Part_A`: `2206` samples
- `Part_B`: `1510` samples

### Split composition

**Part_A**

- `commercial_vehicles`: `308`
- `consumer_vehicles`: `1472`
- `other_objects`: `55`
- `VRU_pedestrians`: `330`
- `VRU_riders`: `41`

**Part_B**

- `commercial_vehicles`: `405`
- `consumer_vehicles`: `602`
- `other_objects`: `90`
- `VRU_pedestrians`: `383`
- `VRU_riders`: `30`

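As a quick sanity check, the per-category counts above do sum to the stated split totals and the overall sample count:

```python
# Per-category counts copied from the split-composition tables above.
PART_A = {"commercial_vehicles": 308, "consumer_vehicles": 1472,
          "other_objects": 55, "VRU_pedestrians": 330, "VRU_riders": 41}
PART_B = {"commercial_vehicles": 405, "consumer_vehicles": 602,
          "other_objects": 90, "VRU_pedestrians": 383, "VRU_riders": 30}

# Category counts sum to each split total, and splits sum to 3716 overall.
assert sum(PART_A.values()) == 2206
assert sum(PART_B.values()) == 1510
assert sum(PART_A.values()) + sum(PART_B.values()) == 3716
```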
### Creation date

- Dataset creation date: `2026-03-25`

## Reference

- [Asset Harvester](https://github.com/NVIDIA/asset-harvester/blob/main/README.md)