
Abstract: Autonomous driving applied to high-speed racing, as opposed to urban environments, presents challenges in scene understanding due to rapid changes in the track environment. Traditional sequential network approaches may struggle to keep up with the real-time knowledge and decision-making demands of an autonomous agent that covers large displacements in a short time. This paper proposes a novel baseline architecture for developing sophisticated models capable of true hardware-enabled parallelism, achieving neural processing speeds that mirror the agent's high velocity. The proposed model, the Parallel Perception Network (PPN), consists of two independent neural networks, a segmentation network and a reconstruction network, running in parallel on separate accelerated hardware. The model takes raw 3D point cloud data from the LiDAR sensor as input and converts it into a 2D Bird's Eye View map on both devices. Each network independently extracts features from its input along the space and time dimensions and produces outputs in parallel. Our model is trained on a system with two NVIDIA T4 GPUs using a combination of loss functions including edge preservation, and shows a 1.8x speed-up in model inference time compared to a sequential configuration. Implementation is available at: https://github.com/suwesh/Parallel-Perception-Network.
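
The sketch below illustrates the general idea described in the abstract: two independent branches placed on separate GPUs so that their forward passes overlap, each fed a Bird's Eye View rasterization of the same LiDAR point cloud. It is a minimal PyTorch illustration, not the repository implementation; the `SmallCNN` and `pointcloud_to_bev` names, the grid size, and the channel counts are hypothetical placeholders, and it assumes two CUDA devices are available.

```python
# Minimal sketch (not the repository code): two independent networks placed on
# separate GPUs so their forward passes can overlap, as described in the abstract.
import torch
import torch.nn as nn

def pointcloud_to_bev(points, grid=256, extent=50.0):
    """Rasterize an (N, 3) LiDAR point cloud into a 1-channel Bird's Eye View
    occupancy map. A simplified stand-in for the paper's BEV conversion."""
    bev = torch.zeros(1, 1, grid, grid)
    xy = points[:, :2].clamp(-extent, extent - 1e-4)
    idx = ((xy + extent) / (2 * extent) * grid).long()
    bev[0, 0, idx[:, 1], idx[:, 0]] = 1.0
    return bev

class SmallCNN(nn.Module):
    """Placeholder for the segmentation / reconstruction branches."""
    def __init__(self, out_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

seg_net = SmallCNN(out_channels=4).to("cuda:0")   # segmentation branch on GPU 0
rec_net = SmallCNN(out_channels=1).to("cuda:1")   # reconstruction branch on GPU 1

points = torch.rand(10000, 3) * 100 - 50           # synthetic LiDAR sweep
bev = pointcloud_to_bev(points)

# CUDA kernels launch asynchronously, so issuing the two forward passes
# back-to-back lets them execute in parallel on the two devices.
seg_out = seg_net(bev.to("cuda:0", non_blocking=True))
rec_out = rec_net(bev.to("cuda:1", non_blocking=True))
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")
print(seg_out.shape, rec_out.shape)
```

The speed-up reported in the paper comes from this kind of device-level parallelism: neither branch waits for the other, so end-to-end latency is bounded by the slower branch rather than by the sum of the two.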
