arxiv:2204.07974

Anomaly Detection in Autonomous Driving: A Survey

Published on Apr 17, 2022
Authors: Daniel Bogdoll, Maximilian Nitsche, J. Marius Zöllner

Abstract

Nowadays, there are outstanding strides towards a future with autonomous vehicles on our roads. While the perception systems of autonomous vehicles perform well under closed-set conditions, they still struggle to handle the unexpected. This survey provides an extensive overview of anomaly detection techniques based on camera, lidar, radar, multimodal and abstract object-level data. We provide a systematization including detection approach, corner case level, ability for an online application, and further attributes. We outline the state of the art and point out current research gaps.

Community

  • A survey of anomaly detection techniques for autonomous driving based on camera, LiDAR, radar, multimodal, and abstract object-level data, focused on natural, external corner cases. Classification/nomenclature: confidence-score techniques operate on already trained networks (they do not interfere with training) and come in Bayesian, learned-score, and post-processing variants (see the MC-dropout sketch after this list); reconstructive approaches learn a model of normality and flag deviations from the reconstruction; generative approaches work like reconstructive ones but add a discriminator; feature-extraction methods derive a label from extracted features; prediction-based techniques predict future frames under the assumption of normality.
    • Table 1 gives an overview of anomaly detection on camera data. Confidence-score methods mostly build on semantic segmentation networks (SemSeg/SegNet) and Mask R-CNN. Reconstructive methods: JSR-Net couples segmentation of the scene with reconstruction; most others reconstruct the image (the complete scene, or object-/patch-wise) and detect anomalies from the reconstruction error (see the autoencoder sketch after this list). Generative methods such as normalizing flows and GANs produce an anomaly map. Feature-extraction methods use clustering (e.g. k-means over SIFT descriptors) and one-class SVM classifiers.
    • Table 2 gives an overview of anomaly detection on LiDAR data; an unexpected cluster of points, or a large and constant shift in appearance, counts as anomalous. Learned-confidence approaches assign a confidence score to each reading to classify whether it is anomalous; DeepSAD projects the LiDAR scan to a reflectance image and learns a latent space in which anomalous (rain) and normal samples are contrasted. VoxelNet, PointRCNN, PointNet++, etc. are closed-set detectors; open-set detectors such as Open-Set Instance Segmentation (OSIS) learn a category-agnostic embedding space (in bird's-eye view) that is clustered with DBSCAN (see the clustering sketch after this list). Metric Learning with Unsupervised Clustering (MLUC) uses the Euclidean Distance Sum (EDS). Reconstructive algorithms (e.g. VAEs) reconstruct normal maps/views and score the difference.
    • Table 3 gives an overview of anomaly detection on radar data, where the typical anomalies are ghost targets caused by reflective surfaces. Approaches rely on feature extraction (from some backbone) or reconstruction, and fall into mathematical, deep-learning, machine-learning, and feature-engineering method sets. Examples include different grouping techniques on top of PointNet++, or learning explicit ghost classes from PointNet++ features. Feature-engineering techniques combine motion data with occupancy information to label anomalies, classified with SVM, kNN, or random-forest models (see the classifier sketch after this list). Reconstructive techniques take an occupancy grid and a moving-detection map and predict future maps; ghost detections are whatever lies outside the prediction.
    • Table 4 gives an overview of anomaly detection on multimodal data. Multimodal transformers fuse LiDAR and radar data to detect ghost targets. Semantic segmentation is applied to RGB-D data. MergeNet mixes features from StripeNet (detects small object-level anomalies in RGB and depth) and ContextNet (high-level/scene information) using a RefinerNet that merges and refines the complementary features; it can discriminate between road, off-road, and small obstacles. Another approach trains an MLP on the latent representations of a supervised VAE.
    • Table 5 covers anomaly detection on the scenario level (beyond specific sensor modalities), e.g. abnormal driving of non-ego vehicles. A probabilistic occupancy grid is built from HMMs on top of the Conditional Monte Carlo Dense Occupancy Tracker (CMCDOT). Adversarial autoencoders score reconstruction deviation; object deviations are categorized as anomalous, novel, or risky scenarios. A U-Net predicts the next frame from past frames, with FlowNet providing optical flow so that temporal information is incorporated; the PSNR between the real and the predicted frame serves as the anomaly score (see the PSNR sketch after this list). Behavior prediction of other agents is done by learning their trajectories.
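
The sketches below are minimal illustrations of the technique families named above, not implementations from the survey. First, the confidence-score family: a Monte-Carlo-dropout sketch in PyTorch, assuming a hypothetical segmentation `model` that contains dropout layers and returns per-pixel class logits.

```python
import torch

def mc_dropout_anomaly_score(model, image, n_samples=10):
    """Monte-Carlo dropout: keep dropout active at test time, average the
    softmax outputs over several stochastic forward passes, and use the
    per-pixel predictive entropy as an anomaly score (high = uncertain)."""
    model.train()  # keeps dropout layers stochastic; assumes the net has dropout
    with torch.no_grad():
        probs = torch.stack(
            [model(image).softmax(dim=1) for _ in range(n_samples)]
        )                                   # (n_samples, B, C, H, W)
    mean_probs = probs.mean(dim=0)          # (B, C, H, W)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
    return entropy                          # (B, H, W); higher = more anomalous
```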
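
For the reconstructive family, a sketch of a tiny convolutional autoencoder trained only on normal driving scenes; the per-pixel reconstruction error then serves as an anomaly map. The architecture and the `anomaly_map` helper are assumptions for illustration, not taken from any surveyed method.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Tiny convolutional autoencoder, trained on normal scenes only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_map(model, image):
    """Per-pixel absolute reconstruction error as an anomaly heatmap:
    regions never seen during training reconstruct poorly.
    `image` is assumed to be scaled to [0, 1]."""
    with torch.no_grad():
        recon = model(image)
    return (image - recon).abs().mean(dim=1)  # (B, H, W), averaged over RGB
```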
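
For the open-set LiDAR approaches that cluster a category-agnostic embedding space (e.g. OSIS), a sketch of grouping leftover per-point embeddings with DBSCAN; the embedding dimensions and clustering parameters here are placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical inputs: per-point, category-agnostic embeddings already
# projected to bird's-eye view; points not explained by the closed-set
# detector are grouped into candidate unknown-object instances.
embeddings = np.random.rand(1000, 8)              # (N points, D feature dims)
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(embeddings)

unknown_instances = [
    embeddings[labels == c] for c in np.unique(labels) if c != -1
]  # label -1 is DBSCAN noise; each remaining cluster is a potential anomaly
```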
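
For the radar feature-engineering methods, a sketch of classifying detections as real vs. ghost targets with a random forest over hand-crafted features; the feature set and labels are made up for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical engineered features per radar detection (e.g. range, radial
# velocity, RCS, occupancy-grid context); labels: 0 = real target, 1 = ghost.
X_train = np.random.rand(2000, 6)
y_train = np.random.randint(0, 2, size=2000)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

X_new = np.random.rand(50, 6)
ghost_prob = clf.predict_proba(X_new)[:, 1]   # probability of being a ghost target
```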
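
For the prediction-based scenario-level methods, a sketch of the PSNR between the observed and the predicted next frame; frames with low PSNR deviate from the learned normal dynamics and are scored as anomalous.

```python
import numpy as np

def psnr(real, predicted, max_val=1.0):
    """Peak signal-to-noise ratio between the observed and the predicted
    frame; a low PSNR indicates an unexpected (anomalous) event."""
    mse = np.mean((real - predicted) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Per-frame anomaly scores are commonly taken as min-max normalized, inverted
# PSNR over a clip, so the lowest-PSNR frame gets the highest anomaly score.
```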

Daniel Bogdoll’s survey on anomaly detection for autonomous driving, from KIT (Karlsruhe, Germany).

Links: GitHub - daniel-bogdoll/phd
