RedTachyon committed
Commit 84c4a45
Parent(s): 36f0c94

Upload folder using huggingface_hub
- XL1N6iLr0G/15_image_0.png +3 -0
- XL1N6iLr0G/15_image_1.png +3 -0
- XL1N6iLr0G/16_image_0.png +3 -0
- XL1N6iLr0G/16_image_1.png +3 -0
- XL1N6iLr0G/17_image_0.png +3 -0
- XL1N6iLr0G/1_image_0.png +3 -0
- XL1N6iLr0G/3_image_0.png +3 -0
- XL1N6iLr0G/4_image_0.png +3 -0
- XL1N6iLr0G/9_image_0.png +3 -0
- XL1N6iLr0G/9_image_1.png +3 -0
- XL1N6iLr0G/XL1N6iLr0G.md +494 -0
- XL1N6iLr0G/XL1N6iLr0G_meta.json +25 -0
XL1N6iLr0G/15_image_0.png ADDED (Git LFS)
XL1N6iLr0G/15_image_1.png ADDED (Git LFS)
XL1N6iLr0G/16_image_0.png ADDED (Git LFS)
XL1N6iLr0G/16_image_1.png ADDED (Git LFS)
XL1N6iLr0G/17_image_0.png ADDED (Git LFS)
XL1N6iLr0G/1_image_0.png ADDED (Git LFS)
XL1N6iLr0G/3_image_0.png ADDED (Git LFS)
XL1N6iLr0G/4_image_0.png ADDED (Git LFS)
XL1N6iLr0G/9_image_0.png ADDED (Git LFS)
XL1N6iLr0G/9_image_1.png ADDED (Git LFS)
XL1N6iLr0G/XL1N6iLr0G.md ADDED
@@ -0,0 +1,494 @@
# An Attribute-Based Method For Video Anomaly Detection

Anonymous authors
Paper under double-blind review
## Abstract

Video anomaly detection (VAD) identifies suspicious events in videos, which is critical for crime prevention and homeland security. In this paper, we propose a simple but highly effective VAD method that relies on attribute-based representations. The base version of our method represents every object by its velocity and pose, and computes anomaly scores by density estimation. Surprisingly, this simple representation is sufficient to achieve state-of-the-art performance on ShanghaiTech, the most commonly used VAD dataset. Combining our attribute-based representations with an off-the-shelf, pretrained deep representation yields state-of-the-art performance, with 99.1%, 93.7%, and 85.9% AUROC on Ped2, Avenue, and ShanghaiTech, respectively.
## 1 Introduction

Video anomaly detection (VAD) aims to discover interesting but rare events in video. The task has attracted much interest as it is critical for crime prevention and homeland security. One-class classification (OCC) is one of the most popular VAD settings, where the training set consists of normal videos only, while at test time the trained model needs to distinguish between normal and anomalous events. The key challenge for learning-based VAD is that classifying an event as normal or anomalous depends on the human operator's particular definition of normality. Differently from supervised learning, there are no training examples of anomalous events; this essentially requires the learning algorithms to have strong priors.

Many previous VAD methods use a combination of deep networks with self-supervised objectives. E.g., a popular line of work trains a neural network to predict the next frame, and classifies a frame as anomalous if the predicted and observed frames differ significantly. While such methods can achieve good performance, they do not make their priors explicit, i.e., it is unclear why they consider some frames more anomalous than others, making them difficult to debug and improve. More recent methods involve a preliminary object extraction stage from video frames, implying that objects are significant for VAD. However, they typically use object-level self-supervised approaches that do not make their priors explicit. Our hypothesis is that making priors more explicit will improve VAD performance.

In this paper, we propose a new approach that directly represents each video frame by simple attributes that are semantically meaningful to humans. Our method extracts objects from every frame, and represents each object by two attributes: its velocity and body pose (in case the object is human). These attributes are well known to be important for VAD (Markovitz et al., 2020; Georgescu et al., 2021a). We detect anomalous values of these representations by using density estimation. Concretely, our method classifies a frame as anomalous if it contains one or more objects that have unusual values of velocity and/or pose (see Fig. 1).
Our simple velocity and pose representations achieve state-of-the-art performance (85.9% AUROC) on the most popular VAD dataset, ShanghaiTech.

While the velocity and pose representations are highly effective, they ignore other attributes, most importantly object category. E.g., if we never see a lion in normal videos, a test video containing a lion is anomalous. To represent these *residual* attributes, we model them with an off-the-shelf, deep representation (here, we use CLIP features). Our final method combines velocity, pose and the deep representations. It achieves state-of-the-art performance on the three most commonly reported datasets.
![1_image_0.png](1_image_0.png)

Figure 1: **The Avenue and ShanghaiTech datasets.** We present the most normal and anomalous frames for each feature. For anomalous frames, we visualize the bounding box of the object with the highest anomaly score. Best viewed in color.
## 2 Related Work

Classical video anomaly detection methods were typically composed of two steps: handcrafted feature extraction and anomaly scoring. Commonly extracted handcrafted features include optical flow histograms (Chaudhry et al., 2009; Colque et al., 2016; Perš et al., 2010) and SIFT (Lowe, 2004). Commonly used scoring methods include density estimation (Eskin et al., 2002; Glodek et al., 2013; Latecki et al., 2007), reconstruction (Jolliffe, 2011), and one-class classification (Scholkopf et al., 2000).

In recent years, deep learning has gained popularity as an alternative to these early works. The majority of video anomaly detection methods utilize at least one of the following paradigms: reconstruction-based, prediction-based, skeletal-based, or auxiliary classification-based methods.
**Reconstruction & prediction based methods.** In the reconstruction paradigm, the normal training data is typically characterized by an autoencoder, which is then used to reconstruct input video clips. The assumption is that a model trained solely on normal training clips will not be able to reconstruct anomalous frames. This assumption does not always hold true, as neural networks can often generalize to some extent out-of-distribution. Notable works are (Nguyen & Meunier, 2019; Chang et al., 2020; Hasan et al., 2016b; Luo et al., 2017b; Yu et al., 2020; Park et al., 2020).

Prediction-based methods learn to predict frames or flow maps in video clips, including inpainting intermediate frames, predicting future frames, and predicting human trajectories (Liu et al., 2018a; Feng et al., 2021b; Chen et al., 2020; Lee et al., 2019; Lu et al., 2019; Park et al., 2020; Wang et al., 2021; Feng et al., 2021a; Yu et al., 2020). Additionally, some works take a hybrid approach combining the two paradigms (Liu et al., 2021b; Zhao et al., 2017; Ye et al., 2019; Tang et al., 2020; Morais et al., 2019). As these methods are trained to optimize both objectives, input frames with large reconstruction or prediction errors are considered anomalous.

**Self-supervised auxiliary tasks.** There has been a great deal of research on learning from unlabeled data. A common approach is to train neural networks on suitably designed auxiliary tasks with automatically generated labels. Tasks include: video frame prediction (Mathieu et al., 2016), image colorization (Zhang et al., 2016; Larsson et al., 2016), puzzle solving (Noroozi & Favaro, 2016), rotation prediction (Gidaris et al., 2018), arrow of time (Wei et al., 2018), predicting playback velocity (Doersch et al., 2015), and verifying frame order (Misra et al., 2016). Many video anomaly detection methods use self-supervised learning. In fact, self-supervised learning is a key component in the majority of reconstruction-based and prediction-based methods. SSMTL (Georgescu et al., 2021a) trains a CNN jointly on three auxiliary tasks: arrow of time, motion irregularity, and middle-box prediction, in addition to knowledge distillation. Jigsaw-Puzzle (Wang et al., 2022) trains neural networks to solve spatio-temporal jigsaw puzzles. The networks are then used for VAD.
**Skeletal methods.** Such methods rely on a pose tracker to extract the skeleton trajectories of each person in the video. Anomalies are then detected using the skeleton trajectory data. Our attribute-based method outperforms previous skeletal methods (e.g., (Markovitz et al., 2020; Rodrigues et al., 2020; Yu et al., 2021; Sun & Gong, 2023)) by a large margin. In concurrent work, (Hirschorn & Avidan, 2023) proposed a skeletal-based method that achieved comparable results to ours on the ShanghaiTech dataset but was outperformed by large margins on the rest of the evaluated benchmarks. Different from skeletal approaches, our method does not require pose tracking, which is extremely challenging in crowded scenes. Our pose features only use a single frame, while our velocity features only require a pair of frames. In contrast, skeletal approaches require pose tracking across many frames, which is expensive and error-prone. It is also important to note that skeletal features by themselves are ineffective in detecting non-human anomalies and are therefore insufficient for providing a complete VAD solution.
**Object-level video anomaly detection.** Early methods, both classical and deep-learning based, operated on entire video frames. This proved difficult for VAD as frames contain many variations, as well as a large number of objects. More recent methods (Georgescu et al., 2021a; Liu et al., 2021b; Wang et al., 2022) operate at the object level by first extracting object bounding boxes using off-the-shelf object detectors. Then, they detect whether each object is anomalous. This is an easier task, as objects contain much less variation than whole frames. Object-based methods yield significantly better results than frame-level methods.

It is often believed that, due to the complexity of realistic scenes and the variety of behaviors, it is difficult to craft features that discriminate between them. As object detection was inaccurate prior to deep learning, classical methods were previously applied at the frame level rather than at the object level, and therefore underperformed on standard benchmarks. We break this misconception and demonstrate that it is possible to craft semantic features that are accurate.
## 3 Method

## 3.1 Preliminaries

Our method assumes a training set consisting of $N_c$ video clips $\{c_1, c_2, \ldots, c_{N_c}\} \in \mathcal{X}_{train}$ that are all normal (i.e., do not contain any anomalies). Each clip $c_i$ is comprised of $N_i$ frames, $c_i = [f_{i,1}, f_{i,2}, \ldots, f_{i,N_i}]$. The goal is to classify each frame $f \in c$ in an inference clip $c$ as normal or anomalous. Our method represents each frame $f$ as $\phi(f) \in \mathbb{R}^d$, where $d \in \mathbb{N}$ is the feature dimension. We compute the anomaly score of frame $f$ using an anomaly scoring function $s(\phi(f))$, and classify it as anomalous if $s(\phi(f))$ exceeds a threshold.
## 3.2 Overview

We propose a method that represents each video frame as a set of objects, with each object characterized by its attributes. This contrasts with most previous methods, which do not explicitly represent attributes.

Specifically, we focus on two key attributes: velocity and body pose. Object velocity is probably the most important attribute, as it can detect whether an object is moving unusually fast (e.g., running away) or in a strange direction (e.g., against the direction of traffic). Since determining true velocity requires 3D understanding, we represent it using optical flow. Similarly, human body poses can reveal anomalous activities, such as throwing an item. Instead of using the full 3D human pose, we represent it with 2D landmarks, which are easier to detect. Both attributes have been used in previous VAD studies, such as Georgescu et al. (2021a) and Markovitz et al. (2020), as part of more complex approaches.
![3_image_0.png](3_image_0.png)

Figure 2: **An overview of our method.** We first extract optical flow maps and bounding boxes for all of the objects in the frame. We then crop each object from the original image and its corresponding flow map. Our representation consists of velocity, pose, and deep (CLIP) features.

We compute the anomaly score based on density estimation of object-level feature descriptors. This is done in three stages: pre-processing, feature extraction, and density estimation. In the pre-processing stage, our method (i) uses an off-the-shelf motion estimator to estimate the optical flow for each frame; (ii) localizes and classifies the bounding boxes of all objects within a frame using an off-the-shelf object detector. The outputs of these models are used to extract object-level velocity, pose, and deep representations (see Sec. 3.4). Finally, our method uses density estimation to calculate the anomaly score of each test frame. See Fig. 2 for an illustration.
## 3.3 Pre-Processing

Our method extracts velocity and pose from each object in each video frame. To do so, we compute optical flow and body landmarks for each object in the video.

**Optical flow.** Our method uses optical flow as a proxy for each object's velocity. We extract the optical flow map $o$ for each frame $f \in c$ in every video clip $c$ using an off-the-shelf optical flow model.

**Object detection.** Our method models frames by representing every object individually. This follows many recent papers, e.g., (Georgescu et al., 2021a; Liu et al., 2021b; Wang et al., 2022), that found object-based representations to be more effective than global, frame-level representations. We first detect all objects in each frame using an off-the-shelf object detector. Formally, the object detector generates a set of $m$ bounding boxes $bb_1, bb_2, \ldots, bb_m$ for each frame, with corresponding category labels $y_1, y_2, \ldots, y_m$.
## 3.4 Feature Extraction

Our method represents each object by two semantic attributes, velocity and pose, and by an implicit, deep representation.

**Velocity features.** Our working hypothesis is that an unusual speed or direction of motion can often identify anomalies in video. As objects can move along both the x and y axes, and both the magnitude (speed) and the orientation of the velocity may be anomalous, we compute velocity features for each object in each frame. We begin by cropping the frame-level optical flow map $o$ by the bounding box $bb$ of each detected object. Following this step, we obtain a set of cropped object flow maps (see Fig. 2). We rescale these flow maps to a fixed size of $H_{flow} \times W_{flow}$. Next, we represent each flow map with the average motion for each orientation, where orientations are quantized into $B \in \mathbb{N}$ equi-spaced bins, following classical optical flow representations, e.g., (Chaudhry et al., 2009). The final representation is a $B$-dimensional vector consisting of the average flow magnitude of the flow vectors in each bin (see Fig. 3). This representation is capable of describing motion in both radial and tangential directions. We denote our velocity feature extractor as $\phi_{velocity}: H_{flow} \times W_{flow} \rightarrow \mathbb{R}^{B}$.

![4_image_0.png](4_image_0.png)

Figure 3: **An illustration of our velocity feature vector.** *Left:* We quantize the orientations into B = 8 equi-spaced bins, and assign each optical flow vector in the object's bounding box to a single bin. *Right:* The value of each bin is the average magnitude of the optical flow vectors assigned to this bin. Best viewed in color.
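To make the velocity descriptor concrete, the following is a minimal NumPy/OpenCV sketch of the $B$-bin orientation histogram described above. The bin assignment, the handling of empty bins (left at zero), and integer bounding-box coordinates are our assumptions; the paper does not specify these details.

```python
import cv2
import numpy as np

def velocity_feature(flow, box, B=8, size=(224, 224)):
    """B-bin orientation histogram of average optical-flow magnitude.
    flow: (H, W, 2) array of per-pixel (dx, dy); box: (x1, y1, x2, y2) ints."""
    x1, y1, x2, y2 = box
    crop = cv2.resize(flow[y1:y2, x1:x2], size)      # rescale to H_flow x W_flow
    dx, dy = crop[..., 0].ravel(), crop[..., 1].ravel()
    mag = np.hypot(dx, dy)                           # flow magnitude
    ang = np.arctan2(dy, dx) % (2 * np.pi)           # orientation in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * B).astype(int), B - 1)
    feat = np.zeros(B)
    for b in range(B):
        mask = bins == b
        if mask.any():
            feat[b] = mag[mask].mean()               # average magnitude per bin
    return feat
```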
**Pose features.** Most anomalous events in video involve humans, so we include activity features in our representation. While a full understanding of activity requires temporal features, we find that human body pose from a single frame can detect many unusual activities. Even though human pose is essentially a 3D feature, 2D body landmark positions already provide much of the signal. We compute pose features for each human object $bb$ using an off-the-shelf 2D keypoint extractor that outputs the pixel coordinates of each landmark position, denoted by $\hat{\phi}_{pose}(bb) \in \mathbb{R}^{2 \times d}$, where $d \in \mathbb{N}$ is the number of keypoints. To ensure invariance to the human's position and size, we normalize the keypoints. First, we subtract the coordinates of the top-left corner of the object bounding box from each landmark. We then scale the x and y axes so that the object bounding box has a fixed size of $H_{pose} \times W_{pose}$, where $H_{pose}$ and $W_{pose}$ are constants. Formally, let $l \in \mathbb{R}^2$ be the top-left corner of the human bounding box. The pose descriptor becomes:

$$\phi_{pose}(bb)=\begin{pmatrix} \frac{H_{pose}}{height(bb)} & 0 \\ 0 & \frac{W_{pose}}{width(bb)} \end{pmatrix}\left(\hat{\phi}_{pose}(bb)-l\right) \qquad (1)$$

Here, $height(bb)$ and $width(bb)$ are the height and width of the object bounding box $bb$, respectively. Finally, we flatten $\phi_{pose}$ to obtain the final pose feature vector.
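A minimal sketch of the keypoint normalization in Eq. (1) is shown below, assuming the keypoints arrive as a $(d, 2)$ array of $(x, y)$ pixel coordinates (the layout used by common 2D pose estimators); the guard against degenerate boxes is our addition.

```python
import numpy as np

def pose_feature(keypoints, box, H_pose, W_pose):
    """Normalized 2D keypoints, following Eq. (1).
    keypoints: (d, 2) array of (x, y) pixel coordinates; box: (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    width = max(x2 - x1, 1e-6)                       # guard against degenerate boxes
    height = max(y2 - y1, 1e-6)
    shifted = keypoints - np.array([x1, y1])         # subtract top-left corner l
    scale = np.array([W_pose / width, H_pose / height])
    return (shifted * scale).ravel()                 # flatten to a 2d-dim vector
```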
**Deep features.** While velocity is the most discriminative attribute, other attributes beyond pose may also matter. Deep features implicitly bundle together many different attributes. Hence, we use them to model the residual attributes which are not described by velocity and pose. We follow previous image anomaly detection work (Reiss et al., 2021; Reiss & Hoshen, 2023) that used generic, pretrained encoders to implicitly represent image attributes. Concretely, we use a pretrained CLIP encoder (Radford et al., 2021), $\phi_{deep}(\cdot)$, to represent the bounding box of each object in each frame. Note that CLIP representations are not competitive on their own; in fact, they perform much worse than the velocity representations (see Tab. 2). However, together, velocity, pose, and CLIP features represent video sufficiently well to outperform the state-of-the-art.
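As an illustration, a ViT-B/16 CLIP image embedding for an object crop can be obtained with the Hugging Face `transformers` API as sketched below. Whether the embeddings are L2-normalized is not stated in the text, so the normalization here is an assumption.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")

@torch.no_grad()
def deep_feature(frame, box):
    """CLIP embedding of an object crop. frame: PIL.Image, box: (x1, y1, x2, y2)."""
    crop = frame.crop(box)
    inputs = processor(images=crop, return_tensors="pt")
    feat = model.get_image_features(**inputs)        # shape (1, 512) for ViT-B/16
    # L2 normalization is an assumption; the paper does not specify it.
    return torch.nn.functional.normalize(feat, dim=-1).squeeze(0)
```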
## 3.5 Density Estimation

We use density estimation for scoring samples as normal or anomalous, where a low estimated density indicates an anomaly. To estimate density, we fit a separate estimator for each feature. As velocity features are low-dimensional, we use a Gaussian mixture model (GMM) estimator. As our pose and deep features are high-dimensional, we estimate their density using kNN, i.e., we compute the L2 distance between the feature $x$ of a target object and the nearest $k$ exemplars in the corresponding training feature set. We compare different exemplar selection methods in Sec. 4.5. We denote our density estimators by $s_{velocity}(\cdot)$, $s_{pose}(\cdot)$, $s_{deep}(\cdot)$.
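A minimal sketch of the two estimators using scikit-learn is shown below. Using the negative GMM log-likelihood and the mean kNN distance as raw anomaly scores is our reading of the text; the exact score definitions are not spelled out.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import NearestNeighbors

def fit_gmm_scorer(train_feats, n_components=5):
    """Velocity scorer: negative GMM log-likelihood (higher = more anomalous)."""
    gmm = GaussianMixture(n_components=n_components).fit(train_feats)
    return lambda x: -gmm.score_samples(np.atleast_2d(x))[0]

def fit_knn_scorer(train_feats, k=1):
    """Pose/deep scorer: mean L2 distance to the k nearest training exemplars."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    def score(x):
        dists, _ = nn.kneighbors(np.atleast_2d(x))
        return float(dists.mean())
    return score
```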
**Score calibration.** Combining the three density estimators requires calibration. To do so, we estimate the distribution of anomaly scores on the normal training set. We then scale the scores using min-max normalization. The kNN used for scoring pose and deep features presents a subtle point: when computing kNN on the training set, the exemplars must not be taken from the same clip as the target object. The reason is that the same object appears in nearby frames with virtually no variation, distorting kNN estimates. Instead, we compute the kNN between each training set object and all objects in the other video clips provided in the training set. We can now define, for all $f \in \{velocity, pose, deep\}$: $\mu_f = \max_x\{s_f(\phi_f(x))\}$ and $\nu_f = \min_x\{s_f(\phi_f(x))\}$.
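One simple way to realize the leave-own-clip-out rule and obtain the calibration constants $(\nu_f, \mu_f)$ is sketched below. It reuses `fit_knn_scorer` from the previous sketch and refits a kNN index per held-out clip; this is only one possible implementation of the rule described above, not necessarily the authors' exact procedure.

```python
import numpy as np

def calibrate_knn(feats_by_clip, k=1):
    """Min-max calibration constants (nu_f, mu_f) for a kNN-scored attribute.
    feats_by_clip: {clip_id: (n_i, dim) array}. Each training object is scored
    against the objects of all *other* clips (leave-own-clip-out)."""
    scores = []
    clip_ids = list(feats_by_clip)
    for cid in clip_ids:
        others = np.concatenate([feats_by_clip[c] for c in clip_ids if c != cid])
        scorer = fit_knn_scorer(others, k=k)         # from the previous sketch
        scores.extend(scorer(f) for f in feats_by_clip[cid])
    return min(scores), max(scores)
```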
## 3.6 Inference

We extract a representation for each object $bb$ in each frame $f$ in each inference clip, as described above. We then compute an anomaly score for each attribute feature of each object $bb$. The score for every frame is simply the maximum score across all objects. The final anomaly score is the sum of the individual feature scores normalized by our calibration parameters:

$$s(f)=\max_{k}\left\{\frac{s_{velocity}(\phi_{velocity}(bb_{k}))-\nu_{velocity}}{\mu_{velocity}-\nu_{velocity}}\right\}+\max_{k}\left\{\frac{s_{pose}(\phi_{pose}(bb_{k}))-\nu_{pose}}{\mu_{pose}-\nu_{pose}}\right\}+\max_{k}\left\{\frac{s_{deep}(\phi_{deep}(bb_{k}))-\nu_{deep}}{\mu_{deep}-\nu_{deep}}\right\} \qquad (2)$$

As anomalous events span multiple frames, we smooth the frame scores using a temporal smoothing filter.
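The following is a minimal sketch of Eq. (2) and the temporal smoothing, given per-object raw scores and the calibration constants from the previous step. The Gaussian filter and its width are assumptions; the paper only states that a temporal smoothing filter is applied.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def frame_score(object_scores, calib):
    """Eq. (2): per attribute, take the max raw score over the frame's objects,
    min-max normalize it with (nu_f, mu_f), and sum over the attributes.
    object_scores: {"velocity": [...], "pose": [...], "deep": [...]}."""
    total = 0.0
    for name, scores in object_scores.items():
        if scores:
            nu, mu = calib[name]
            total += (max(scores) - nu) / (mu - nu)
    return total

def smooth_scores(frame_scores, sigma=3.0):
    """Temporal smoothing of the per-frame scores (filter choice is an assumption)."""
    return gaussian_filter1d(np.asarray(frame_scores, dtype=float), sigma)
```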
## 4 Experiments

## 4.1 Datasets

We evaluated our method on three publicly available VAD datasets, using their standard training and test splits. Only test videos include anomalous events.

**UCSD Ped2.** This dataset (Mahadevan et al., 2010) contains 16 normal training videos and 12 test videos at a 240 × 360 pixel resolution. Videos show a fixed scene with a camera above the scene and pointed downward. The training video clips contain only normal behavior of pedestrians walking, while examples of abnormal events are bikers, skateboarding, and cars.

**CUHK Avenue.** This dataset (Lu et al., 2013) contains 16 normal training videos and 21 test videos at a 360 × 640 pixel resolution. Videos show a fixed scene using a ground-level camera. Training video clips contain only normal behavior. Examples of abnormal events are strange activities (e.g., throwing objects, loitering, and running), movement in the wrong direction, and abnormal objects.

**ShanghaiTech Campus.** This dataset (Liu et al., 2018a) is the largest publicly available dataset for VAD. There are 330 training videos and 107 test videos from 13 different scenes at a 480 × 856 pixel resolution. ShanghaiTech contains video clips with complex lighting conditions and camera angles, making this dataset more challenging than the other two. Anomalies include robberies, jumping, fights, car invasions, and bike riding in pedestrian areas.
## 4.2 Implementation Details

We use a ResNet50 Mask-RCNN (He et al., 2017) pretrained on MS-COCO (Lin et al., 2014) to extract object bounding boxes. To filter out low-confidence objects, we follow the same configurations as in (Georgescu et al., 2021a); specifically, for Ped2, Avenue, and ShanghaiTech, we set confidence thresholds of 0.5, 0.8, and 0.8, respectively. To generate optical flow maps, we use FlowNet2 (Ilg et al., 2017). For landmark detection, we use AlphaPose (Fang et al., 2017) pretrained on MS-COCO with d = 17 keypoints. We use a pretrained ViT B-16 CLIP (Dosovitskiy et al., 2020; Radford et al., 2021) image encoder as our deep feature extractor.

Our method is built around the extracted objects and flow maps. We use $H_{velocity} \times W_{velocity} = 224 \times 224$ to rescale flow maps. For the $H_{pose} \times W_{pose}$ rescaling, we calculate the average height and width of the bounding boxes in the training set and use those values. The lower resolution of Ped2 prevents objects from filling a histogram or from yielding pose representations; therefore, we use B = 1 orientations and rely solely on velocity and deep representations. We use B = 8 orientations for Avenue and ShanghaiTech. At test time, for anomaly scoring we use kNN for the pose and deep representations with k = 1 nearest neighbors. For velocity, we use a GMM with n = 5 Gaussians. Finally, the anomaly score of a frame is the maximum score among all the objects within that frame.
| Year | Method | Ped2 Micro | Ped2 Macro | Avenue Micro | Avenue Macro | ShanghaiTech Micro | ShanghaiTech Macro |
|------|--------|------------|------------|--------------|--------------|--------------------|--------------------|
| ≤ 2019 | HOOF (Chaudhry et al., 2009) | 61.1 | - | - | - | - | - |
| | HOFM (Colque et al., 2016) | 89.9 | - | - | - | - | - |
| | SCL (Lu et al., 2013) | - | - | 80.9 | - | - | - |
| | Conv-AE (Hasan et al., 2016a) | 90.0 | - | 70.2 | - | - | - |
| | StackRNN (Luo et al., 2017a) | 92.2 | - | 81.7 | - | 68.0 | - |
| | STAN (Lee et al., 2018) | 96.5 | - | 87.2 | - | - | - |
| | MC2ST (Liu et al., 2018b) | 87.5 | - | 84.4 | - | - | - |
| | Frame-Pred. (Liu et al., 2018a) | 95.4 | - | 85.1 | - | 72.8 | - |
| | Mem-AE (Gong et al., 2019) | 94.1 | - | 83.3 | - | 71.2 | - |
| | CAE-SVM (Ionescu et al., 2019) | 94.3 | 97.8 | 87.4 | 90.4 | 78.7 | 84.9 |
| | BMAN (Lee et al., 2019) | 96.6 | - | 90.0 | - | 76.2 | - |
| | AM-Corr (Nguyen & Meunier, 2019) | 96.2 | - | 86.9 | - | - | - |
| 2020 | MNAD-Recon. (Park et al., 2020) | 97.0 | - | 88.5 | - | 70.5 | - |
| | CAC (Wang et al., 2020) | - | - | 87.0 | - | 79.3 | - |
| | Scene-Aware (Sun et al., 2020) | - | - | 89.6 | - | 74.7 | - |
| | VEC (Yu et al., 2020) | 97.3 | - | 90.2 | - | 74.8 | - |
| | ClusterAE (Chang et al., 2020) | 96.5 | - | 86.0 | - | 73.3 | - |
| 2021 | AMMCN (Cai et al., 2021) | 96.6 | - | 86.6 | - | 73.7 | - |
| | SSMTL (Georgescu et al., 2021a) | 97.5 | 99.8 | 91.5 | 91.9 | 82.4 | 89.3 |
| | MPN (Lv et al., 2021) | 96.9 | - | 89.5 | - | 73.8 | - |
| | HF2 (Liu et al., 2021a) | 99.3 | - | 91.1 | 93.5 | 76.2 | - |
| | CT-D2GAN (Feng et al., 2021a) | 97.2 | - | 85.9 | - | 77.7 | - |
| | BA-AED (Georgescu et al., 2021b) | 98.7 | 99.7 | 92.3 | 90.4 | 82.7 | 89.3 |
| | SSPCAB (Ristea et al., 2022) | - | - | 92.9 | 91.9 | 83.6 | 89.5 |
| 2022 | DLAN-AC (Yang et al., 2022) | 97.6 | - | 89.9 | - | 74.7 | - |
| | Jigsaw-Puzzle (Wang et al., 2022) | 99.0 | 99.9 | 92.2 | 93.0 | 84.3 | 89.8 |
| 2023 | USTN-DSC (Yang et al., 2023) | 98.1 | - | 89.9 | - | 73.8 | - |
| | EVAL (Singh et al., 2023) | - | - | 86.0 | - | 76.6 | - |
| | FB-SAE (Cao et al., 2023) | - | - | 86.8 | - | 79.2 | - |
| | FPDM (Yan et al., 2023) | - | - | 90.1 | - | 78.6 | - |
| | LMPT (Shi et al., 2023) | 97.6 | - | 90.9 | - | 78.8 | - |
| | STF-NF (Hirschorn & Avidan, 2023) | 93.1 | - | 60.1 | - | 85.9 | - |
| 2024 | Ours | 99.1 | 99.9 | 93.7 | 96.3 | 85.9 | 89.6 |

Table 1: **Frame-level AUROC (%) comparison.** The best and second-best results are bolded and underlined, respectively.
## 4.3 Evaluation Metrics

Our study uses standard VAD evaluation metrics. We vary the threshold over the anomaly scores to measure the frame-level Area Under the Receiver Operating Characteristic curve (AUROC) with respect to the ground-truth annotations. We report two types of AUROC: (i) micro-averaged AUROC, which computes the score over all frames from all videos pooled together; (ii) macro-averaged AUROC, which computes the AUROC score individually for each video and then averages the scores of all videos. Most existing studies report micro-averaged AUROC, while only a few report macro-averaged AUROC.
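For reference, the two aggregation schemes can be computed with scikit-learn as sketched below; skipping test videos that contain only one class in the macro average is an assumption, since AUROC is undefined for such videos.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def micro_macro_auroc(scores_per_video, labels_per_video):
    """scores_per_video / labels_per_video: lists of per-frame arrays, one pair
    per test video; labels are 1 for anomalous frames and 0 otherwise."""
    micro = roc_auc_score(np.concatenate(labels_per_video),
                          np.concatenate(scores_per_video))
    per_video = [roc_auc_score(y, s)
                 for y, s in zip(labels_per_video, scores_per_video)
                 if len(np.unique(y)) > 1]           # AUROC needs both classes
    return micro, float(np.mean(per_video))
```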
## 4.4 Quantitative Results

We compare our method with the state-of-the-art from recent years in Tab. 1. We took the performance numbers of the baseline methods directly from the original papers.

**Ped2 results.** Most methods obtain over 94% on Ped2, indicating that of the three public datasets, it is the simplest. While our method is comparable to the current state-of-the-art method (HF2 (Liu et al., 2021b)) in terms of performance, the near-perfect results on Ped2 indicate it is practically solved.

**Avenue results.** Our method obtained a new state-of-the-art micro-averaged AUROC of 93.7%. Our method also outperformed the current state-of-the-art in terms of macro-averaged AUROC by a considerable margin of 2.8%, reaching 96.3%.

**ShanghaiTech results.** Our method outperforms all previous methods on the largest dataset, ShanghaiTech, by a considerable margin. Accordingly, our method achieves 85.9% AUROC, 1.6% higher than the best previously reported performance of 84.3% (Jigsaw-Puzzle (Wang et al., 2022)). We note that in concurrent work, STF-NF (Hirschorn & Avidan, 2023) achieved comparable results to ours on the ShanghaiTech dataset. Our method outperforms it by large margins on Ped2 (by 6.0% AUROC) and Avenue (by 33.6% AUROC).
To summarize, our method achieves the highest performance on the three most popular public benchmarks. It consists of three simple representations and does not require training.
## 4.5 Analysis

**Ablation study.** We report in Tab. 2 the anomaly detection performance on the Avenue and ShanghaiTech datasets for all attribute combinations. Our findings reveal that the velocity features provide the highest frame-level AUROC of any single attribute on both Avenue and ShanghaiTech, with 86.0% and 84.4% micro-averaged AUROC, respectively. On ShanghaiTech, our velocity features *on their own* are already state-of-the-art compared with all previous VAD methods. We expect this to be due to the large number of anomalies associated with speed and motion, such as running people and fast-moving objects, e.g., cars and bikes. Adding either pose or CLIP features improved performance, mostly macro-AUROC, presumably because they provide information about human activity, which accounts for some of the anomalies in this dataset. Velocity features were still the most performant single attribute on Avenue; however, combining them with deep features improved performance significantly. Overall, we observe that using all three features performed best.

**Number of velocity bins.** We ablated the impact of different numbers of bins (B) in our velocity features in Tab. 3. We compared AUROC scores on the Avenue and ShanghaiTech datasets. The results indicate that the choice of B influences detection accuracy. Specifically, we observed that increasing the number of bins from B = 1 to B = 8 led to consistent improvements in both micro and macro AUROC scores on both datasets. This suggests that a finer quantization of velocity orientations represents motion better and improves anomaly detection. Performance gains diminish beyond B = 8.

**k-Means as a faster alternative.** Computing kNN has linear complexity in the number of objects in the dataset, which may be slow for large datasets. We can speed it up by reducing the number of samples via k-means. In Tab. 4, we compare the performance of our method with kNN and k-means. Note that k-means still uses kNN to calculate anomaly scores as the sum of distances to the nearest neighbor means. This is much faster than the original kNN as there are fewer means than objects in the training set. We observe that it improves inference time with a small accuracy loss. A minimal sketch of this variant follows the next paragraph.

**Pose features for non-human objects.** We extract pose representations exclusively for human objects and not for non-human objects. We calculate the pose anomaly score for each frame by taking the score of the object with the most anomalous pose. Non-human objects are given a pose anomaly score of −∞ and therefore do not contribute to the frame-wise pose anomaly score.
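Below is a minimal scikit-learn sketch of the k-means variant referenced above. The number of means and nearest centroids are exposed as parameters because it is not fully explicit whether the k in Tab. 4 denotes the number of means or the number of neighbors; the values used here are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def fit_kmeans_scorer(train_feats, n_means=100, k=1):
    """Faster scorer: summarize the training features by n_means k-means
    centroids, then score a query by the sum of distances to its k nearest
    centroids, as described for the k-means variant above."""
    centroids = KMeans(n_clusters=n_means, n_init=10).fit(train_feats).cluster_centers_
    nn = NearestNeighbors(n_neighbors=min(k, n_means)).fit(centroids)
    def score(x):
        dists, _ = nn.kneighbors(np.atleast_2d(x))
        return float(dists.sum())                    # sum of distances to nearest means
    return score
```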
| Pose Features | Deep Features | Velocity Features | Avenue Micro | Avenue Macro | ShanghaiTech Micro | ShanghaiTech Macro |
|---------------|---------------|-------------------|--------------|--------------|--------------------|--------------------|
| ✓ | | | 73.8 | 76.2 | 74.5 | 81.0 |
| | ✓ | | 85.4 | 87.7 | 72.5 | 82.5 |
| | | ✓ | 86.0 | 89.6 | 84.4 | 84.8 |
| ✓ | ✓ | | 89.3 | 88.8 | 76.7 | 84.9 |
| | ✓ | ✓ | 93.0 | 95.5 | 84.5 | 88.7 |
| ✓ | | ✓ | 86.8 | 93.0 | 85.9 | 88.8 |
| ✓ | ✓ | ✓ | 93.7 | 96.3 | 85.1 | 89.6 |

Table 2: **Ablation study.** Results are in frame-level AUROC (%). The best and second-best results are in bold and underlined, respectively.
| Bins (B) | Avenue Micro | Avenue Macro | ShanghaiTech Micro | ShanghaiTech Macro |
|----------|--------------|--------------|--------------------|--------------------|
| B = 1 | 83.5 | 83.5 | 81.2 | 80.9 |
| B = 2 | 84.1 | 83.8 | 82.1 | 82.7 |
| B = 4 | 85.5 | 89.2 | 84.0 | 84.6 |
| B = 8 | 86.0 | 89.6 | 84.4 | 84.8 |
| B = 16 | 84.1 | 88.4 | 83.1 | 84.2 |

Table 3: Comparison of different numbers of bins (B) in our velocity features. Frame-level AUROC (%) comparison. Best in bold.
| k | Avenue Micro | Avenue Macro | ShanghaiTech Micro | ShanghaiTech Macro |
|---|--------------|--------------|--------------------|--------------------|
| k = 1 | 91.8 | 94.0 | 84.2 | 87.2 |
| k = 5 | 92.0 | 94.2 | 84.3 | 88.1 |
| k = 10 | 92.1 | 94.5 | 84.6 | 88.1 |
| k = 100 | 92.9 | 95.2 | 84.8 | 88.6 |
| All | 93.7 | 96.3 | 85.1 | 89.6 |

Table 4: Our final results when kNN is replaced by k-means. Frame-level AUROC (%) comparison. Best in bold.
**Running times.** We carried out all our experiments on an NVIDIA RTX 2080 GPU. Our pre-processing stage, which includes object detection and optical flow extraction, takes approximately 80 milliseconds (ms) per frame. It takes our method approximately 5 ms to compute the velocity, pose, and deep feature extraction stages, combined with anomaly scoring. Our method runs at 12 FPS with an average of 5 objects per frame.
## 4.6 Qualitative Results

We visualize the anomaly detection process for Avenue and ShanghaiTech in Fig. 4 and Fig. 5, where we plot the anomaly scores across all frames of a video. Our anomaly scores are clearly highly correlated with anomalous events, demonstrating the effectiveness of our method.

![9_image_0.png](9_image_0.png)

Figure 4: Frame-level scores and anomaly localizations for Avenue's test video 04. Best viewed in color.

![9_image_1.png](9_image_1.png)

Figure 5: Frame-level scores and anomaly localizations for ShanghaiTech's test video 03_0059. Best viewed in color.
## 5 Discussion

**Exploring other semantic attributes.** There are other important attributes for VAD beyond velocity and pose. Identifying other relevant attributes that correlate with anomalous events can further improve anomaly detection systems. For example, attributes related to object interactions, spatial arrangements, or temporal patterns may be very discriminative for some types of anomalies. Finding ways to systematically discover such attributes may significantly speed up research.

**Guidance.** Finding relevant attributes for anomaly detection may require user guidance. In real-world scenarios, operators have domain knowledge about factors that lead to anomalous behavior. Efficiently incorporating this guidance, such as selecting velocity or pose features in our work, is essential for leveraging this knowledge effectively.

**Other academic benchmarks.** While our method, using simple attributes, was effective on the three most popular VAD datasets, extending it to more complex datasets may require more work. Publicly available datasets such as UCF-Crime (Sultani et al., 2018) and XD-Violence (Wu et al., 2020), which feature a wider variety of anomalies and larger scales, present additional challenges. These datasets are essentially different from the ones tested here as they contain *distinct* scenes in training and testing data, and include moving cameras, which also change the scene. So far, only weakly-supervised VAD has been successful on these datasets, as it relies on labeled anomalous data during training. The field needs new, more complex datasets within the fixed-camera setting to further stress-test one-class classification VAD methods such as ours.
## 6 Ethical Considerations

While VAD offers significant potential for enhancing public safety and security, it is crucial to acknowledge and address the ethical implications of such technology. VAD systems, including our proposed method, can be used in surveillance applications, which raises important privacy concerns. The continuous monitoring of public spaces may lead to a sense of constant observation, potentially infringing on individuals' right to privacy and freedom of movement. Moreover, there is a risk that VAD systems could be misused for unauthorized tracking or profiling of individuals.

To mitigate these ethical risks, several strategies should be considered in the development and deployment of VAD systems. First, strict data protection protocols should be implemented to ensure that collected video data is securely stored, accessed only by authorized individuals, and deleted after a defined period. Second, VAD use should be transparent, with clear warnings informing individuals when they are entering areas under surveillance. Third, VAD systems should be designed with privacy-preserving techniques, such as immediate data anonymization or the use of low-resolution data that can detect anomalies without identifying individuals. By implementing these measures, we can work towards harnessing the benefits of VAD technology while respecting individual privacy and civil rights.

## 7 Conclusion

We propose a simple yet highly effective attribute-based method for video anomaly detection (VAD). Our method represents each object in each frame using velocity and pose representations and uses density estimation to compute anomaly scores. These simple representations are sufficient to achieve state-of-the-art performance on the ShanghaiTech and Ped2 datasets. By combining attribute-based representations with implicit deep representations, we achieve top VAD performance with AUROC scores of 99.1%, 93.7%, and 85.9% on Ped2, Avenue, and ShanghaiTech, respectively. Our extensive ablation study highlights the relative merits of the three representations. Overall, our method is both accurate and easy to implement.
## References

Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In *ICML*, volume 2, pp. 4, 2021.

Ruichu Cai, Hao Zhang, Wen Liu, Shenghua Gao, and Zhifeng Hao. Appearance-motion memory consistency network for video anomaly detection. In *AAAI*, 2021.

Congqi Cao, Yue Lu, Peng Wang, and Yanning Zhang. A new comprehensive benchmark for semi-supervised video anomaly detection and anticipation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 20392–20401, 2023.

Yunpeng Chang, Zhigang Tu, Wei Xie, and Junsong Yuan. Clustering driven deep autoencoder for video anomaly detection. In *European Conference on Computer Vision*, pp. 329–345. Springer, 2020.

Rizwan Chaudhry, Avinash Ravichandran, Gregory Hager, and René Vidal. Histograms of oriented optical flow and Binet-Cauchy kernels on nonlinear dynamical systems for the recognition of human actions. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1932–1939. IEEE, 2009.

Dongyue Chen, Pengtao Wang, Lingyi Yue, Yuxin Zhang, and Tong Jia. Anomaly detection in surveillance video based on bidirectional prediction. *Image and Vision Computing*, 98:103915, 2020.

Rensso Victor Hugo Mora Colque, Carlos Caetano, Matheus Toledo Lustosa de Andrade, and William Robson Schwartz. Histograms of optical flow orientation and magnitude and entropy to detect anomalous events in videos. *IEEE Transactions on Circuits and Systems for Video Technology*, 27(3):673–682, 2016.

Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 1422–1430, 2015.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.

Eleazar Eskin, Andrew Arnold, Michael Prerau, Leonid Portnoy, and Sal Stolfo. A geometric framework for unsupervised anomaly detection. In *Applications of Data Mining in Computer Security*, pp. 77–101. Springer, 2002.

Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, and Cewu Lu. RMPE: Regional multi-person pose estimation. In *ICCV*, 2017.

Xinyang Feng, Dongjin Song, Yuncong Chen, Zhengzhang Chen, Jingchao Ni, and Haifeng Chen. Convolutional transformer based dual discriminator generative adversarial networks for video anomaly detection. In *Proceedings of the 29th ACM International Conference on Multimedia*, 2021a.

Xinyang Feng, Dongjin Song, Yuncong Chen, Zhengzhang Chen, Jingchao Ni, and Haifeng Chen. Convolutional transformer based dual discriminator generative adversarial networks for video anomaly detection. In *Proceedings of the 29th ACM International Conference on Multimedia*, pp. 5546–5554, 2021b.

Mariana-Iuliana Georgescu, Antonio Bărbălău, Radu Tudor Ionescu, Fahad Shahbaz Khan, Marius Popescu, and Mubarak Shah. Anomaly detection in video via self-supervised and multi-task learning. In *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 12737–12747, 2021a.

Mariana Iuliana Georgescu, Radu Tudor Ionescu, Fahad Shahbaz Khan, Marius Popescu, and Mubarak Shah. A background-agnostic framework with adversarial training for abnormal event detection in video. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(9):4505–4523, 2021b.

Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. *arXiv preprint arXiv:1803.07728*, 2018.

Michael Glodek, Martin Schels, and Friedhelm Schwenker. Ensemble Gaussian mixture models for probability density estimation. *Computational Statistics*, 28(1):127–138, 2013.

Dong Gong, Lingqiao Liu, Vuong Le, Budhaditya Saha, Moussa Reda Mansour, Svetha Venkatesh, and Anton van den Hengel. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 1705–1714, 2019.

Mahmudul Hasan, Jonghyun Choi, Jan Neumann, Amit K Roy-Chowdhury, and Larry S Davis. Learning temporal regularity in video sequences. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 733–742, 2016a.

Mahmudul Hasan, Jonghyun Choi, Jan Neumann, Amit K Roy-Chowdhury, and Larry S Davis. Learning temporal regularity in video sequences. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 733–742, 2016b.

Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 2961–2969, 2017.

Or Hirschorn and Shai Avidan. Normalizing flows for human pose anomaly detection. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 13545–13554, October 2023.

Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2462–2470, 2017.

Radu Tudor Ionescu, Fahad Shahbaz Khan, Mariana-Iuliana Georgescu, and Ling Shao. Object-centric autoencoders and dummy anomalies for abnormal event detection in video. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7842–7851, 2019.

Ian Jolliffe. *Principal Component Analysis*. Springer, 2011.

Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The Kinetics human action video dataset. *arXiv preprint arXiv:1705.06950*, 2017.

Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Learning representations for automatic colorization. In *ECCV*, 2016.

Longin Jan Latecki, Aleksandar Lazarevic, and Dragoljub Pokrajac. Outlier detection with kernel density functions. In *International Workshop on Machine Learning and Data Mining in Pattern Recognition*, pp. 61–75. Springer, 2007.

Sangmin Lee, Hak Gu Kim, and Yong Man Ro. STAN: Spatio-temporal adversarial networks for abnormal event detection. In *2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 1323–1327. IEEE, 2018.

Sangmin Lee, Hak Gu Kim, and Yong Man Ro. BMAN: Bidirectional multi-scale aggregation networks for abnormal event detection. *IEEE Transactions on Image Processing*, 29:2395–2408, 2019.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In *European Conference on Computer Vision*, pp. 740–755. Springer, 2014.

Wen Liu, Weixin Luo, Dongze Lian, and Shenghua Gao. Future frame prediction for anomaly detection – a new baseline. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 6536–6545, 2018a.

Yusha Liu, Chun-Liang Li, and Barnabás Póczos. Classifier two-sample test for video anomaly detections. In *BMVC*, 2018b.

Zhian Liu, Yongwei Nie, Chengjiang Long, Qing Zhang, and Guiqing Li. A hybrid video anomaly detection framework via memory-augmented flow reconstruction and flow-guided frame prediction. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 13588–13597, October 2021a.

Zhian Liu, Yongwei Nie, Chengjiang Long, Qing Zhang, and Guiqing Li. A hybrid video anomaly detection framework via memory-augmented flow reconstruction and flow-guided frame prediction. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 13588–13597, 2021b.

David G Lowe. Distinctive image features from scale-invariant keypoints. *International Journal of Computer Vision*, 60(2):91–110, 2004.

Cewu Lu, Jianping Shi, and Jiaya Jia. Abnormal event detection at 150 fps in MATLAB. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 2720–2727, 2013.

Yiwei Lu, K Mahesh Kumar, Seyed Shahabeddin Nabavi, and Yang Wang. Future frame prediction using convolutional VRNN for anomaly detection. In *2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)*, pp. 1–8. IEEE, 2019.

Weixin Luo, Wen Liu, and Shenghua Gao. A revisit of sparse coding based anomaly detection in stacked RNN framework. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 341–349, 2017a.

Weixin Luo, Wen Liu, and Shenghua Gao. A revisit of sparse coding based anomaly detection in stacked RNN framework. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 341–349, 2017b.

Hui Lv, Chen Chen, Zhen Cui, Chunyan Xu, Yong Li, and Jian Yang. Learning normal dynamics in videos with meta prototype network. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 15425–15434, June 2021.

Vijay Mahadevan, Weixin Li, Viral Bhalodia, and Nuno Vasconcelos. Anomaly detection in crowded scenes. In *2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition*, pp. 1975–1981. IEEE, 2010.

Amir Markovitz, Gilad Sharir, Itamar Friedman, Lihi Zelnik-Manor, and Shai Avidan. Graph embedded pose clustering for anomaly detection. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10539–10547, 2020.

Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In *ICLR*, 2016.

Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuffle and learn: Unsupervised learning using temporal order verification. In *European Conference on Computer Vision*, pp. 527–544. Springer, 2016.

Romero Morais, Vuong Le, Truyen Tran, Budhaditya Saha, Moussa Mansour, and Svetha Venkatesh. Learning regularity in skeleton trajectories for anomaly detection in videos. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11996–12004, 2019.

Trong-Nguyen Nguyen and Jean Meunier. Anomaly detection in video sequence with appearance-motion correspondence. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 1273–1283, 2019.

Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, and Haibin Ling. Expanding language-image pretrained models for general video recognition. *arXiv preprint*, 2022.

Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In *ECCV*, 2016.

Hyunjong Park, Jongyoun Noh, and Bumsub Ham. Learning memory-guided normality for anomaly detection. In *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 14360–14369, 2020.

Janez Perš, Vildana Sulić, Matej Kristan, Matej Perše, Klemen Polanec, and Stanislav Kovačič. Histograms of optical flow for efficient representation of body motion. *Pattern Recognition Letters*, 31(11):1369–1376, 2010.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning*, pp. 8748–8763. PMLR, 2021.

Tal Reiss and Yedid Hoshen. Mean-shifted contrastive loss for anomaly detection. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 2155–2162, 2023.

Tal Reiss, Niv Cohen, Liron Bergman, and Yedid Hoshen. PANDA: Adapting pretrained features for anomaly detection and segmentation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 2806–2814, 2021.

Nicolae-Cătălin Ristea, Neelu Madan, Radu Tudor Ionescu, Kamal Nasrollahi, Fahad Shahbaz Khan, Thomas B Moeslund, and Mubarak Shah. Self-supervised predictive convolutional attentive block for anomaly detection. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 13576–13586, 2022.

Royston Rodrigues, Neha Bhargava, Rajbabu Velmurugan, and Subhasis Chaudhuri. Multi-timescale trajectory prediction for abnormal human activity detection. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pp. 2626–2634, 2020.

Bernhard Scholkopf, Robert C Williamson, Alex J Smola, John Shawe-Taylor, and John C Platt. Support vector method for novelty detection. In *NIPS*, 2000.

Chenrui Shi, Che Sun, Yuwei Wu, and Yunde Jia. Video anomaly detection via sequentially learning multiple pretext tasks. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 10330–10340, October 2023.

Ashish Singh, Michael J Jones, and Erik G Learned-Miller. EVAL: Explainable video anomaly localization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 18717–18726, 2023.

Waqas Sultani, Chen Chen, and Mubarak Shah. Real-world anomaly detection in surveillance videos. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 6479–6488, 2018.

Che Sun, Y. Jia, Yao Hu, and Y. Wu. Scene-aware context reasoning for unsupervised abnormal event detection in videos. In *Proceedings of the 28th ACM International Conference on Multimedia*, 2020.

Shengyang Sun and Xiaojin Gong. Hierarchical semantic contrast for scene-aware video anomaly detection. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 22846–22856, 2023.

Yao Tang, Lin Zhao, Shanshan Zhang, Chen Gong, Guangyu Li, and Jian Yang. Integrating prediction and reconstruction for anomaly detection. *Pattern Recognition Letters*, 129:123–130, 2020.

Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training. *arXiv preprint arXiv:2203.12602*, 2022.

Guodong Wang, Yunhong Wang, Jie Qin, Dongming Zhang, Xiuguo Bao, and Di Huang. Video anomaly detection by solving decoupled spatio-temporal jigsaw puzzles. In *European Conference on Computer Vision (ECCV)*, 2022.

Xuanzhao Wang, Zhengping Che, Bo Jiang, Ning Xiao, Ke Yang, Jian Tang, Jieping Ye, Jingyu Wang, and Qi Qi. Robust unsupervised video anomaly detection by multipath frame prediction. *IEEE Transactions on Neural Networks and Learning Systems*, 2021.

Ziming Wang, Yuexian Zou, and Zeming Zhang. Cluster attention contrast for video anomaly detection. In *Proceedings of the 28th ACM International Conference on Multimedia*, 2020.

Donglai Wei, Joseph J Lim, Andrew Zisserman, and William T Freeman. Learning and using the arrow of time. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 8052–8060, 2018.

Peng Wu, Jing Liu, Yujia Shi, Yujia Sun, Fangtao Shao, Zhaoyang Wu, and Zhiwei Yang. Not only look, but also listen: Learning multimodal violence detection under weak supervision. In *European Conference on Computer Vision (ECCV)*, 2020.

Cheng Yan, Shiyu Zhang, Yang Liu, Guansong Pang, and Wenjun Wang. Feature prediction diffusion model for video anomaly detection. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 5527–5537, October 2023.

Zhiwei Yang, Peng Wu, Jing Liu, and Xiaotao Liu. Dynamic local aggregation network with adaptive clusterer for anomaly detection. In *European Conference on Computer Vision*, pp. 404–421. Springer, 2022.

Zhiwei Yang, Jing Liu, Zhaoyang Wu, Peng Wu, and Xiaotao Liu. Video event restoration based on keyframes for video anomaly detection. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 14592–14601, 2023.

Muchao Ye, Xiaojiang Peng, Weihao Gan, Wei Wu, and Yu Qiao. AnoPCN: Video anomaly detection via deep predictive coding network. In *ACM MM*, 2019.

Guang Yu, Siqi Wang, Zhiping Cai, En Zhu, Chuanfu Xu, Jianping Yin, and Marius Kloft. Cloze test helps: Effective video anomaly detection via learning to complete video events. In *Proceedings of the 28th ACM International Conference on Multimedia*, pp. 583–591, 2020.

Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. CoCa: Contrastive captioners are image-text foundation models. *arXiv preprint arXiv:2205.01917*, 2022.

Shoubin Yu, Zhongyin Zhao, Haoshu Fang, Andong Deng, Haisheng Su, Dongliang Wang, Weihao Gan, Cewu Lu, and Wei Wu. Regularity learning via explicit distribution modeling for skeletal video anomaly detection. *arXiv preprint arXiv:2112.03649*, 2021.

Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In *ECCV*, 2016.

Yiru Zhao, Bing Deng, Chen Shen, Yao Liu, Hongtao Lu, and Xian-Sheng Hua. Spatio-temporal autoencoder for video anomaly detection. In *ACM MM*, 2017.
|
455 |
+
|
456 |
+
## A More Qualitative Results
We provide additional qualitative results of our method on the evaluation datasets.
In Ped2, Fig. 6 and Fig. 7 demonstrate the effectiveness of our method, which easily detects fast-moving objects such as trucks and bicycles. Accordingly, we conclude that Ped2 has been practically solved, based on the near-perfect results obtained by our method (as well as by many others). Fig. 8 shows that our method can detect anomalies within a short timeframe. Fig. 9 and Fig. 10 provide further qualitative evidence of our method's ability to detect anomalies of various types. Overall, our method achieves a new state of the art on Avenue and ShanghaiTech, surpassing other approaches by a wide margin.

![15_image_0.png](15_image_0.png)

Figure 6: Frame-level scores and anomaly localization examples for test video 04 from Ped2. Best viewed in color.

![15_image_1.png](15_image_1.png)

Figure 7: Frame-level scores and anomaly localization examples for test video 05 from Ped2. Best viewed in color.
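The frame-level score curves shown in Figures 6–10 are obtained by aggregating per-object anomaly scores within each frame. The snippet below is a minimal sketch of one common aggregation convention, assumed here for illustration rather than as our exact procedure: each frame takes the maximum score over its detected objects, followed by temporal Gaussian smoothing (the kernel width `sigma` and the fallback score for empty frames are illustrative choices).

```python
# Illustrative aggregation of per-object anomaly scores into frame-level scores.
# `object_scores` is assumed to be a list with one entry per frame, each entry
# holding the anomaly scores of the objects detected in that frame.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def frame_level_scores(object_scores, sigma=3.0, empty_score=0.0):
    # A frame is as anomalous as its most anomalous object; frames with no
    # detections fall back to a low default score.
    per_frame = np.array([max(s) if len(s) > 0 else empty_score for s in object_scores])
    # Temporal smoothing suppresses spurious single-frame spikes.
    return gaussian_filter1d(per_frame, sigma=sigma)
```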
## B More Discussion
What are the benefits of pretrained features? Previous image anomaly detection work (Reiss et al., 2021) demonstrated that feature extractors pretrained on external, generic datasets (e.g., a ResNet trained on ImageNet classification) achieve high anomaly detection performance. This was shown on a wide variety of datasets spanning different sizes, domains, resolutions, and symmetries; the same representations achieved state-of-the-art performance on distant domains such as aerial, microscopy, and industrial images. As the anomalies in these datasets typically have nothing to do with velocity or human pose, it is clear that pretrained features model many attributes beyond velocity and pose. Consequently, by combining our attribute-based representations with CLIP's image encoder, we emphasize both the explicit attributes (velocity and pose) derived from real-world priors and the attributes that cannot be described by them, achieving the best of both worlds.

![16_image_0.png](16_image_0.png)

Figure 8: Frame-level scores and anomaly localization examples for test video 03 from Avenue. Best viewed in color.

![16_image_1.png](16_image_1.png)

Figure 9: Frame-level scores and anomaly localization examples for test video 01_0025 from ShanghaiTech. Best viewed in color.

Why do we use an image encoder instead of a video encoder? Newer and better self-supervised learning methods, e.g., TimeSformer (Bertasius et al., 2021), VideoMAE (Tong et al., 2022), X-CLIP (Ni et al., 2022), and CoCa (Yu et al., 2022), are constantly improving the performance of pretrained video encoders on downstream supervised tasks such as Kinetics-400 (Kay et al., 2017). It is therefore natural to expect that video encoders, which exploit both temporal and spatial information, would outperform image encoders, which do not. Unfortunately, in preliminary experiments we found that features extracted by pretrained video encoders did not work as well as pretrained image features on the type of benchmark videos used in VAD. This result underscores the strong generalizability of pretrained image encoders, previously highlighted in the context of image anomaly detection. Improving the generalizability of pretrained video features in the one-class classification VAD setting is a promising avenue for future work.
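To make the combination described above concrete, the snippet below sketches per-object scoring with pretrained CLIP image features and k-nearest-neighbor density estimation, fused with an attribute-based score. It is a minimal sketch rather than our exact implementation: the helper names, the choice of k, and the fusion by summing min-max-normalized scores are illustrative assumptions.

```python
# Illustrative sketch: kNN density scores on CLIP image features of object
# crops, fused with attribute-based scores. Names and fusion are assumptions.
import numpy as np
import torch
import clip  # OpenAI CLIP package (https://github.com/openai/CLIP)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

def clip_features(crops):
    """Encode a list of PIL object crops with CLIP's image encoder."""
    batch = torch.stack([preprocess(c) for c in crops]).to(device)
    with torch.no_grad():
        feats = model.encode_image(batch).float()
    feats = torch.nn.functional.normalize(feats, dim=-1)
    return feats.cpu().numpy()

def knn_score(train_feats, test_feats, k=5):
    """Anomaly score of each test feature: mean distance to its k nearest training features."""
    dists = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    return np.sort(dists, axis=1)[:, :k].mean(axis=1)

def fused_scores(attr_train, attr_test, deep_train, deep_test, k=5):
    """Sum of min-max-normalized attribute-based and deep-feature kNN scores."""
    minmax = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-8)
    return minmax(knn_score(attr_train, attr_test, k)) + minmax(knn_score(deep_train, deep_test, k))
```

When the training set is large, the exact pairwise distances above can be replaced by an approximate nearest-neighbor index (e.g., faiss) without changing the overall scheme.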
![17_image_0.png](17_image_0.png)
Figure 10: Frame-level scores and anomaly localization examples for test video 07_0048 from ShanghaiTech. Best viewed in color.
XL1N6iLr0G/XL1N6iLr0G_meta.json
ADDED
@@ -0,0 +1,25 @@
{
  "languages": null,
  "filetype": "pdf",
  "toc": [],
  "pages": 18,
  "ocr_stats": {
    "ocr_pages": 1,
    "ocr_failed": 0,
    "ocr_success": 1,
    "ocr_engine": "surya"
  },
  "block_stats": {
    "header_footer": 18,
    "code": 0,
    "table": 4,
    "equations": {
      "successful_ocr": 3,
      "unsuccessful_ocr": 0,
      "equations": 3
    }
  },
  "postprocess_stats": {
    "edit": {}
  }
}