arxiv:2210.12878

IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes

Published on Oct 23, 2022
Abstract

Autonomous driving and assistance systems rely on annotated data from traffic and road scenarios to model and learn the various object relations in complex real-world scenarios. Preparation and training of deployable deep learning architectures require the models to be suited to different traffic scenarios and adapt to different situations. Currently, existing datasets, while large-scale, lack such diversity and are geographically biased toward mainly developed cities. The unstructured and complex driving layouts found in several developing countries such as India pose a challenge to these models due to the sheer degree of variation in object types, densities, and locations. To facilitate better research toward accommodating such scenarios, we build a new dataset, IDD-3D, which consists of multi-modal data from multiple cameras and LiDAR sensors, with 12k annotated driving LiDAR frames across various traffic scenarios. We discuss the need for this dataset through statistical comparisons with existing datasets and highlight benchmarks on standard 3D object detection and tracking tasks in complex layouts. Code and data are available at https://github.com/shubham1810/idd3d_kit.git

Community

Proposes the IDD-3D dataset: camera and LiDAR data with 3D bounding-box annotations from unstructured driving scenes in India, aimed at exploring complex environments. LiDAR instance labels cover vehicle types, pedestrians, and rider types (motorcycle, scooter), plus a non-trivial set of additional annotated classes (bicycle, animal, etc.). The capture platform has six cameras and one Ouster OS1 LiDAR. Data is annotated using SUSTechPOINTS, with high annotation density for close points. Benchmarks SECOND, CenterPoint, and PointPillars for 3D object detection, reporting AP per category (CenterPoint performs best). Also benchmarks 3D multi-object tracking (ground-truth instance IDs are available), reporting AMOTA and AMOTP on CenterPoint detections using the SimpleTrack tracker. Supplementary material includes trajectory visualizations, data-platform details, additional tracking results, a dataset distance-distribution visualization, and distance-wise 3D object detection results. From IIIT-H and UCSD.

Links: website, Supplementary material, SimpleTrack 3D MOT, GitHub (SUSTechPOINTS annotation tool)
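The tracking metrics mentioned above (AMOTA/AMOTP) are standard multi-object tracking measures. As a point of reference, here is a minimal sketch of the plain MOTA definition and AMOTA taken as the average of MOTA over a sweep of recall thresholds; this is illustrative only (nuScenes-style benchmarks use a scaled sAMOTA variant), and is not code from the idd3d_kit repository.

```python
# Minimal sketch of MOTA and AMOTA, the style of tracking metric
# reported in the paper's benchmarks. Illustrative only.

def mota(fp: int, fn: int, ids: int, gt: int) -> float:
    """Multi-Object Tracking Accuracy: 1 - (FP + FN + IDS) / GT,
    where GT is the number of ground-truth objects."""
    return 1.0 - (fp + fn + ids) / gt

def amota(motas: list[float]) -> float:
    """Average MOTA across a sweep of recall thresholds."""
    return sum(motas) / len(motas)

# Example: a tracker evaluated at three recall thresholds.
per_threshold = [
    mota(fp=5, fn=10, ids=1, gt=100),   # 0.84
    mota(fp=8, fn=20, ids=2, gt=100),   # 0.70
    mota(fp=12, fn=35, ids=3, gt=100),  # 0.50
]
print(round(amota(per_threshold), 2))   # -> 0.68
```

Higher recall thresholds trade missed detections (FN) for false positives (FP), which is why the metric is averaged over the sweep rather than reported at a single operating point.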
